September 30, 2025 XR Creators


Hi Mike, thanks so much for joining us! To get started, can you please give us an overview of your background?

My interest in technology started very early. Back in high school, a buddy and I would find scrap computers that people were throwing out in our neighborhood in Windsor, Ontario. We’d grab these discarded towers from beside the trash and start pulling them apart, sticking RAM in different places, finding processors, and experimenting. Eventually, we managed to boot up our first PC and even found a way to install Windows 95 on it. This was my first hands-on experience with technology – literally building something functional from parts others had discarded.

This DIY approach eventually led me to architecture. Looking back, I can see the common thread between building computers and designing buildings – both involve taking individual components and assembling them into functional systems. With computers, it was hardware components creating a working machine; with architecture, it’s design elements creating a livable space. I’ve loved architecture since I can remember. I’ve been designing and creating things for a very long time, and architecture was just a natural fit for me. It was obvious this was the path I should take because I see it as one of the great art forms where you create something and it gets built. It’s permanent – it’ll certainly outlive you, if it’s not demolished or burnt down in a fire.

I graduated from the architecture program at St. Clair College in Windsor, Ontario in 2010 and worked as an architectural designer for a number of years. After graduation, I co-founded an architectural design firm called Siren in Windsor before expanding my career path. At those firms, my work focused on architectural design, graphic design, client presentations, and 3D modeling – a whole array of skills that eventually grew and transformed into using Unreal Engine and ultimately led me down this path. My background starts with architecture and having a solid foundation in that field.

 

What inspired you to get into immersive tech? Please tell us a bit about your journey to getting into the XR industry.

In architecture, going back to around 2013-14, the firms I worked in were all using 3D when designing buildings. We were among the first to bring 3D modeling into architecture firms, using tools like Revit, SketchUp, Maya, and 3ds Max. Maya and Max are particularly valuable because they lend themselves to creating 3D environments that transfer cleanly into Unity or Unreal.

Around that time, we started exploring architectural rendering engines that allowed us to navigate through spaces in a game-like fashion. Traditional CAD is quite boring and strictly professional. The industry was resistant to anything that resembled gaming, but the potential was clearly there. I recognized that we should be able to walk through architectural designs just like we navigate through video games.

When I moved to Vancouver in 2014, I joined a progressive architecture firm that was constantly looking for innovative ways to communicate projects to clients. This environment really opened the door to exploring immersive technologies.

By 2016, I had founded a startup called 3DFX in Windsor. That’s where we truly entered the XR space. We were creating 3D renderings and animations while exploring Unreal Engine. We were working on importing CAD models into Unreal when the Oculus DK2 was just coming out. An advisor in the Windsor tech ecosystem who knew about our work called me and offered an Oculus DK2, which I immediately accepted.

At that time, Unreal wasn’t perfectly integrated with VR as it is today, so we had to do considerable troubleshooting to make it work. But when we finally got it running, it was one of the most incredible experiences. We went from staring at a computer screen to actually jumping into our project through the VR headset. Suddenly we were standing in the living room of our design. It was truly a momentous occasion – a genuine game changer.

We were so excited that we issued a press release and invited the Windsor Star, the local newspaper, to cover this new technology. We organized a full week of demo days where architects, commercial developers, and builders could visit our VR demo room to experience a virtual condo. The response was tremendous and helped establish us as innovators in the space.

 

Can you share more about the exciting projects you’re currently working on?

Fast forwarding to our current projects, we have Geopogo Cities, which is an Unreal Engine desktop-based application we built for architects and urban planners. We provide them with 3D city data for locations around the planet and the ability to import their CAD and design models to place building designs within actual city contexts. It’s essentially SimCity for real cities.

If an architect is designing a tower in downtown San Francisco, they can bring in their design model, position it in the virtual city, and see the views from outside. They can also go inside and see the views out the windows – for example, from the top of a building like Salesforce Tower, where the views are spectacular. For developers who want to sell those views, instead of waiting for construction, they can immediately capture renderings, experience them in VR, and use them for marketing.

AI integration was a natural next step for us, as it is for virtually every tech company now. I specifically wanted to create a web-based program that would be functional rather than just an explainer site that gets you to subscribe or download something.

Over the summer, we had been using OpenAI’s API for image-to-image generation. This is particularly valuable because while we use the Google Maps API in Unreal Engine to get city data, which looks good from above, at ground level it’s just satellite and lidar scan data that doesn’t capture perfect reality. We use AI to enhance or replace those details.

However, OpenAI’s API had issues with hallucination. When it reimagined images, it would often alter the architect’s original design, which is problematic. Just a few weeks ago – and this demonstrates how rapidly AI is evolving – Google released an image model for Gemini nicknamed Nano Banana. I’m not sure why they chose that name, but crucially, it doesn’t hallucinate.

We built a program with this API where you can upload a screenshot of your CAD file from Revit or SketchUp, unrendered, and simply type how you want it transformed – enhanced realism, added people, different lighting, etc. What’s remarkable is that it preserves the integrity of the original design. For architecture, this is absolutely paramount – you want the windows, doors, roof, and all design elements to remain unchanged while enhancing the visual presentation.

We’ve also developed ZoneQuest, an AI-powered tool that instantly retrieves zoning information based on property addresses. It provides users – including real estate brokers, architectural designers, builders, and property owners – with detailed insights into what can be built on a specific property using generative AI. This significantly reduces research time and helps prevent costly mistakes early in the design process.

Our team implemented these solutions, and they’re now live on geopogo.com. You just click on AI, upload any image, and start experimenting. You can make unlimited edits, and even create animated videos with the results.

What are some of the most significant challenges you’ve faced in bringing AR/3D visualization to the architecture and urban planning industry? How have you tackled these obstacles?

The XR community as a whole has been significantly challenged with adoption. I’ve had considerable time to reflect on what happened there, particularly within the architecture niche.

Over the past decade, we’ve seen an array of headsets – from Oculus DK2 to HTC Vive, Magic Leap One, HoloLens, Magic Leap Two, HoloLens Two, Meta Quest devices, and more. We’ve continuously adapted and built for these platforms, always trying to get closer to that ultimate XR promise.

I believe there’s a fundamental disconnect in how different groups perceive this technology. There are people like us in the AWE (Augmented World Expo) community who intuitively understand and love XR. For some reason, we immediately grasped its potential from the moment we discovered it. We instinctively knew what we wanted from it and could envision its applications.

What we in the XR community failed to recognize is that there are people who don’t share this intuitive understanding. They don’t possess that pre-knowledge or sense of wonder about the technology that we somehow instinctually had. This created a significant gap between visionaries and consumers that has been difficult to bridge.

The hardware limitations compounded this problem. Early headsets had narrow fields of view, tracking issues, and comfort problems that made extended use challenging. We encountered clients who couldn’t even figure out how to put on a Magic Leap headset, let alone navigate a complex 3D environment. The learning curve was steep, and without an immediately obvious benefit that outweighed these challenges, many potential users simply walked away.

On the point of this technological divide, you have groups like us who immediately recognize the potential. We think, “I know exactly what this technology is and what I’m going to do with it. I understand how to apply it to my business or vision of the future.” Then you have the other group who simply asks, “What is this? How will it benefit me?”

One of our central challenges was this knowledge gap. We knew what to do with the technology, but consumers didn’t. What makes XR particularly challenging is that both groups demand complete or “pure” versions of the technology – the ultimate VR or AR experience. What I experienced over the years, moving from headset to headset, was that each new release offered incremental improvements. Maybe it had a wider field of view, outdoor capability with dynamic dimming like Magic Leap, or a more comfortable form factor. But it was never quite what it needed to be. Everyone always wanted something more from it.

Over the years, we’ve had numerous successful AR and VR projects. We’ve created compelling experiences and released products for both platforms. What’s particularly interesting about the architecture niche is how slow they are to adopt change. They’re not a group that naturally embraces new technology, which we didn’t fully appreciate at first.

The standard entrepreneurial approach is to identify needs, solve problems, develop better technology than what currently exists, and release it to market with hopes of success. Reality, of course, works differently. We found success with individual XR projects but struggled with broader market adoption. There always seemed to be a ceiling on the market’s appetite for new technology.

I constantly questioned why architectural firms could have tremendous success with a client using augmented reality but wouldn’t think to apply it to all their projects. Eventually, I understood what was happening: firms in architecture only adopt new technology when a specific client or project demands it. They don’t apply innovations across their practice because their budgets and margins are so thin that implementing technology without client demand would actually cost them money. Without client pressure, there’s simply no incentive to change established workflows.

 

You’ve been a key figure in the recent push for AI-powered architectural visualization. What inspired you to take on this project, and what do you hope it will achieve for the architecture and urban planning community?

We’re currently working on a fascinating project with a client who’s purchasing a property for renovation. The client created some impressive sketches to express his vision for the property. This scenario illustrates a common situation: a savvy client sees a property, has a clear vision for it, and attempts to use conventional tools to express that vision. Typically, the client can communicate a percentage of their concept, but not completely – which is why they need designers and architects to professionalize it, apply building codes, and craft the final vision.

This traditional process supports an entire industry. However, I believe that within five years, AI will transform this completely. A client will be able to take pictures of a property with their phone, upload them to an AI app, and simply describe their vision: “I want three bedrooms, a garage, a pool,” and so on. There’s no reason why AI couldn’t return not just photorealistic renderings of that vision but also basic professional floor plans that could theoretically be submitted for approval.

There’s a status quo mentality in architecture and urban planning that says, “We’ve done a good job, though we can always improve.” But sometimes you walk around cities and think, “Is this really the best we can do?” San Francisco might be an exception, but look at many cities across the Midwest or even parts of Oakland in the Bay Area, and you have to question if this represents our best work.

My fundamental belief is that ideas aren’t owned by institutions. The architectural and urban planning establishment shouldn’t claim exclusive ownership of the best ideas for our built environment. Every person on this planet has ideas that are equally valuable, if not more so – especially when communities collaborate. What ordinary people lack is access to the means of production – the ability to create the technical drawings and documentation that can turn ideas into reality. That’s the gap that exists right now.

We all can envision spaces we want or desire, but most of us can’t produce floor plans or translate those visions into reality. AI will be a powerful tool to bridge this gap, democratizing the design process and giving more people the ability to create.

I’ve seen firsthand how visualization technology can empower communities. In 2019, I returned to Windsor and used augmented reality to show residents what the South Cameron woodlot would look like if developed after its provincially significant wetland designation was removed. By allowing people to stand in the actual woodlot and see potential houses, driveways, and roads through Magic Leap headsets, they could make informed decisions about whether this was the future they wanted for their community. This kind of civic engagement through technology is exactly what I hope AI and AR will enable on a broader scale.

As AI becomes more sophisticated in design and planning, what aspects of architecture do you think should always remain uniquely human? Where do you draw the line between AI assistance and AI decision-making?

That’s an incredibly challenging question. There will undoubtedly be philosophical debates about this as AI continues to evolve.

Let’s take a step back for perspective. In architecture, product design, and furniture design, there are millions of designers worldwide creating things every day. Most aren’t famous – we tend to focus on renowned figures like Leonardo da Vinci or celebrated architects. We hold these exceptional individuals as the pinnacle of creativity while rarely acknowledging the work of average designers.

The truth is that AI will soon be able to produce designs of equal or better quality than the average professional designer. There will always be exceptional human creators – the “starchitects” as they’re called in architecture – who can produce truly original work. But for everyday design needs in society, AI will become a tool that can match or exceed typical human capabilities.

Regarding the distinction between human and AI design, I believe human design matters most when it’s personal. Consider receiving a birthday card – would you prefer a mass-produced Hallmark card with a signature, or a card handmade specifically for you?

For mass-produced items, the personal element becomes less significant. Society generally doesn’t care whether a human spent hundreds of hours designing a piece of furniture in their studio. If it’s being mass-produced at IKEA, consumers just want it for $59.99, easy assembly, and functional placement in their home. They’re indifferent to the story behind it unless that narrative enhances the marketing appeal.

Where human design will remain essential is in one-on-one relationships – creating something for someone you care about. An AI-generated gift lacks the emotional investment that makes personal creations meaningful. I anticipate we’ll see AI-generated design dominate mass production and commercial applications, while the personal, human aspects of design will continue to be relationship-based.

For architects specifically, I believe their role will evolve rather than disappear. Instead of spending hours on technical drawings and renderings, they’ll focus more on the conceptual, emotional, and human-centered aspects of design. They’ll become curators and directors of AI-generated possibilities, applying their expertise to select and refine the most appropriate solutions. The architect’s value will increasingly be in their ability to understand human needs, navigate complex social and environmental contexts, and make judgment calls that AI simply cannot make.

What aspects of the XR industry do you believe need to evolve or change? Why?

I believe we’ve reached the end of the headset introduction cycle. Apple effectively concluded this phase when they released the Vision Pro. Once Apple entered the market with their version of what was already available, it put the final nail in the coffin of public appetite for new VR headsets.

What happened was revealing: Apple entered the game with a product that was only incrementally better than existing options – perhaps 1.5 times better than a Meta Quest 3 – but at a dramatically higher price point of $3,400 compared to around $599 for the Quest. The market showed curiosity but ultimately responded with skepticism.

This reflects VR’s persistent challenge outside of gaming applications: the technology needs to offer something significantly better than existing solutions. We know from studying technology adoption that incremental improvements don’t drive mass adoption, especially at premium price points. The improvement must be transformative – like what the iPhone was to flip phones. The Apple Vision Pro simply didn’t deliver that level of revolutionary change.

With this realization, I believe the standalone headset market has concluded its innovation cycle. The industry is now waiting for the augmented AI glasses market to begin in earnest, as evidenced by Mark Zuckerberg’s recent showcases with Ray-Ban.

In retrospect, I think we pushed too hard and too quickly with Unity, Unreal Engine, and immersive 3D experiences. The market response was essentially, “Slow down – we just need notifications in our glasses, not an entire 3D world.” The industry was trying to run before it could walk.

What we’ll likely see next is smart glasses with AI-embedded functionality delivering simple notifications. Instead of checking your phone, you’ll glance through your glasses for messages, social media updates, or directions. From there, we’ll gradually push boundaries toward more complex applications – from notifications to web-based applications, and eventually back to immersive 3D experiences, probably in the late 2020s or 2030s.

 

You’re disrupting traditional architecture workflows now, but what emerging technology do you think might disrupt what you’re building? Are you preparing for your own obsolescence, and if so, how?

It’s already happened. The disruption isn’t coming – it’s here. It’s somewhat unsettling to reflect on this. Just a couple of years ago, clients would approach us needing architectural renderings quickly. We’d prepare proposals for several thousand dollars – perhaps four or five grand for a basic package, with additional services priced accordingly. This business model supported our team, paid the bills, and provided livelihoods.

Fast forward to today, and the very technology we’re creating has essentially eliminated that revenue stream entirely. What’s worse, it’s reduced it to an inexpensive subscription service, yet clients still expect more features and capabilities. I sometimes wonder if we fully understand the trap we’re setting for ourselves.

Four years ago, I could generate $5,000-10,000 from a quick rendering project, applying artistic talent and technical skill to create something valuable. This supported not just me but a team of people, including those who brought in the business. Today, that entire revenue model has vanished. Instead of paying thousands of dollars for custom work, clients now pay us $19.99 a month for access to our tools.

The irony is that we had no choice but to adapt. If we hadn’t developed these AI tools ourselves, clients would simply tell us, “Why should I pay you thousands when this AI tool lets me do it myself for practically nothing?”

Looking at Geopogo AI from a business perspective – and having been in this industry for a decade – I sometimes question how much traditional project work we actually want to do. There’s a fundamental shift happening from creating bespoke projects to developing tools that people worldwide can use independently while paying us a subscription fee. Our primary job becomes marketing rather than production.

It represents a complete perspective shift – almost a “stop working” mentality. Instead of continuously creating Unreal or Unity projects, we’re focusing on getting people to join and use our platform. In a way, this was always the end goal for many of us in the industry – creating tools that scale beyond our individual capacity to deliver services.

Tell me about a time when an AR visualization or AI feature you developed completely missed the mark with users. What went wrong, what did you learn, and how did it change your approach to product development?

Reflecting on our AR development experience, particularly with Magic Leap, I think our biggest mistake was falling into what I call “perpetual engineering.” This is when a product never stops being built – it’s constantly under development. This is perhaps the most dangerous situation in entrepreneurship because it requires your engineering team to continuously build, upgrade, and improve at a pace that inevitably outstrips market adoption.

When we entered the augmented reality space for architecture, we had one beautifully simple goal: create the ultimate architectural presentation by taking clients to the actual construction site, projecting their building onto that location, and letting them walk around and experience it. That’s it. That’s magical.

The vision was compelling, but we didn’t account for human factors. People come with fears, anxieties, personal ambitions, and complex emotions. What surprised us was that architects were often afraid to let clients freely explore designs in AR. They feared clients might discover something they didn’t like – even though identifying issues virtually could potentially save hundreds of thousands of dollars in change orders or construction modifications.

It’s like when you visit friends who’ve renovated their home – they almost always say, “We would have done this differently if we’d known how it would turn out.” That’s exactly the problem AR could solve, but professional resistance limited its adoption.

Looking back, I wish I had known how to create an extremely simple product and market it to the right audience. The ideal audience would have been homeowners building new homes or renovating kitchens and bathrooms. Their value proposition is straightforward – they just want to see their project before committing to decisions. That’s it.

Selling to architects and contractors introduced complexities that the early headsets simply couldn’t handle. However, the best decision I made during this period was declining an airport project. When I received the RFP and reviewed their requirements, I recognized that the technical complexities exceeded what our headsets could deliver. It came down to patience versus greed. The financial opportunity was tempting, but I knew the technology wouldn’t perform adequately.

I had to decide: did I want my team to struggle with an inevitably disappointed client, or should I simply decline the project altogether? I chose to walk away. I didn’t even engage with the opportunity. This might seem counterintuitive, but projects of that scale consume six months of your life, and sometimes it’s better to avoid problematic situations entirely. Better opportunities eventually emerge to replace those you decline.

I’ve found – and this is challenging for entrepreneurs to accept – that you’re often more successful because of the projects you turn down rather than those you accept.

Imagine it’s 2030 and AR/AI in architecture has evolved exactly as you hope. What does a typical day look like for an architect using these tools? What problems that exist today have been completely eliminated?

Let me paint a vision of what I think would be the perfect day in 2031, when AR/AI has evolved exactly as I hope.

It’s September 24, 2031, and Geopogo is thriving. That summer, a revolutionary new generation of AR glasses was released that finally delivers everything we’ve been wanting. These glasses can project building designs with photorealistic quality, and they incorporate intuitive editing tools that respond to natural hand gestures – no controllers needed. You can simply reach out, grab a window or wall element, and move it with your fingers.

The experience is even more immersive than that. You could walk onto an empty lot and begin designing a house right there on site using just your hands. You’d place walls, doors, windows, floors, and roofs through intuitive gestures. The system would save everything as a 3D model that you could then share with your architect or builder, or access later through a web application on your phone.

Imagine a client presentation day where an architectural firm brings their entire team and client group to the site. Everyone puts on these glasses, which immediately and precisely project the building exactly where it’s meant to be constructed. There’s no technical fiddling – no need to move, scale, or rotate the model. Everyone sees the same building in the same place, rendered photorealistically and anchored perfectly to the environment.

The truly transformative aspect is that everyone can walk through the space together, seeing not just the building but also each other moving through the virtual design. The architect can guide the entire group through what has become the ultimate architectural presentation. No more relying on storyboards or asking clients to imagine spaces – because clients never truly imagine what you describe anyway. Instead, everyone experiences the architect’s vision exactly as intended.

At the culmination of the presentation, the architect leads the group to a vista point where they can all take in the new building within its landscape context. The clients are so impressed they applaud, the architect becomes a superstar in their eyes, and Geopogo gets paid for providing the technology that made it all possible.

 

In your opinion, what are some key considerations for privacy and ethical development within XR?

Regarding privacy, I believe it’s fundamentally important that users can expect their devices to be private – meaning no company is monitoring their activities. At Geopogo, privacy is built into our company philosophy. We deliberately don’t track what users do with our tools; it’s simply not our concern.

To illustrate why this matters, we were once invited to a headset release event by one of the major companies in the space. During the presentation, they proudly showcased their new eye-tracking technology, explaining how it could monitor where users look to optimize advertising, analyze time spent on applications, and identify what captures attention. The entire XR community at that presentation was visibly uncomfortable with this approach because it felt invasive.

While data collection can certainly help deliver better experiences, I fundamentally believe in creating excellent products for specific use cases and then letting people use them however they want. Why should we care exactly how they’re using our tools? The priority should be making something genuinely useful rather than monitoring usage patterns. At Geopogo, we don’t know what our users do with our platform, and I prefer to keep it that way, as long as it’s legal.

The ethics of XR is a more complex question. I don’t have many personal examples of ethical boundaries being crossed in XR development, but I’ve often thought about its potential as an empathy tool.

One powerful application could be using VR to educate the public by allowing them to experience events from the perspective of someone living through tragedy – for example, war. I believe we have such emotional disconnection from conflicts in places like Gaza or Ukraine because of how information is presented to us. We hear news reports stating “a drone strike killed 100 people, including 22 children,” but these statistics become abstract and we grow numb to them over time.

What if instead, you could actually witness what a drone strike is like from the perspective of someone experiencing it, or even from a child’s viewpoint? This raises ethical questions about representation and trauma, but could such experiences help people truly understand the terror of war? Could this deeper understanding potentially help end conflicts sooner by creating genuine empathy? These are the kinds of ethical questions we should be exploring as technology evolves.

As XR technology becomes more widespread, I believe we’ll need thoughtful regulatory frameworks that balance innovation with protection. The question of who owns the data collected by XR devices – especially biometric data like eye tracking, movement patterns, and physiological responses – will become increasingly important. I think the industry should proactively develop ethical standards rather than waiting for government regulation, which often struggles to keep pace with technological advancement. Companies that prioritize user privacy and ethical considerations will ultimately build more trust and likely see better long-term adoption.

For individuals looking to enter the XR space (entrepreneurs, artists, students), what advice would you offer to help them succeed?

Number one, build something. If you’re going to release something into the world, do it as quickly as possible. Build it fast, whatever it is, and get it onto the market immediately. So many people wait – they delay creating their social media presence, publishing materials, launching their website. They’re constantly waiting for everything to be perfect.

Instead, just do it from day one. In fact, the more raw and authentic your initial release looks, the better chance you have of building a following. Release something immediately – even if it’s just a LinkedIn page that says, “Hey, I’m so-and-so, and this is my idea for the world.” That’s enough to get started.

My second piece of advice is to avoid lengthy engineering challenges without proper funding. If you’re creating a startup and the first thing you need to do is build a product that will take six to eight months to develop, you don’t have a business – you have a project. Unless you have secured funding specifically for that development period, walk away.

If you want to generate revenue, you need to release something immediately that can bring in money. Without revenue, you have nothing. Passion and hope can sustain you for a while, but the last thing you want is for that hope to turn sour, which happens when too much time passes while you’re struggling to monetize a passion project.

Finally, you must honestly ask yourself why you’re doing this. This is perhaps the hardest question, but it’s essential. Your answer might be straightforward, but as long as you have one, that’s good enough. You just need to have an answer.

Who are some of the key mentors who have guided your journey? What impact have they had on you?

Dave, the CEO of Geopogo, has certainly been my biggest mentor by far. He has guided me in many significant ways since I joined the company in Berkeley after my time in Windsor and Vancouver. First and foremost, he trusted everyone who joined the Geopogo team to do what they thought was best, which is remarkably rare in leadership. Many CEOs take the opposite approach, but Dave came in with the attitude of, “I’ve created this opportunity. Show me what ideas you have. Show me what you can do. I’ve built the platform, now you excel with it, and I’ll guide you along the way.”

He also taught us to genuinely care about people. It sounds simple, but it’s profound. In conversations, whether at events or meetings, Dave demonstrated how to be truly interested in others. We’ve all met people who ask thoughtful questions because they genuinely want to know about you, versus those who know nothing about you but talk exclusively about themselves. Dave exemplified the former approach, showing how authentic interest in others builds stronger relationships and better business outcomes.

What’s your favorite inspirational quote? What about the quote inspires you?

A really good buddy of mine used to say this quote from a Gucci Mane song: “I do not expect anybody to do for me what I won’t do for me.”

What that meant was you don’t ever wait for anybody to do something for you that you could do for yourself. In our early days of entrepreneurship, we were at an accelerator where everyone was hustling. Everyone was going through tough times because our town wasn’t conducive to entrepreneurship.

Our dream was to make it out. We lived on grinding, on hustling, on finding opportunity from nowhere, on the whole concept of making opportunities from thin air. That particular saying meant to us that no one’s going to do it for you. No one’s going to hold your hand. No one’s going to help you. If you want something to happen, you have to make it happen yourself, even if you don’t do it perfectly. You still tried. It’s the hustle. That’s the realest quote I’ve got.

 

Find Michael on LinkedIn and learn more about his company at Geopogo.

Know someone who should be interviewed for an XR Creator Spotlight? Please email us at hello@xrcreators.org.

