Please give us an overview of your background, what inspired you to get into tech, and your journey into the XR/ BCI industry.
Thanks for having me! I’m Michele from Italy, and my journey into tech started pretty early. I was always the kid fascinated by new gadgets – from Walkmans to CD players – and, as many can relate, I became the go-to tech support for my family.
It all started when I accidentally deleted the entire file system on our brand new Windows XP computer. My father was freaking out and blaming me for it; I spent an entire day fixing it, using CDs with manuals and tutorials to reinstall everything. It was a hassle, but that’s how I began setting up, and later building, my own PCs.
When it came time for university, I chose computer science, partly for the career prospects and partly because I liked the idea of working with computers. But after my bachelor’s, I realized pure computer science wasn’t the best fit for me.
The real turning point came during my master’s in human-computer interaction. I did a double specialization – one in situated interaction, which covers VR, AR, and tangible interfaces, and another in machine learning and intelligent systems. This is when I really got to play around and have fun, sometimes combining different technologies like AR and tangible interfaces in a single project.
It was during this time that I discovered brain-computer interfaces (BCI), which became my main passion and career focus. Back then I was playing around with virtual environments, and I started imagining what could be achieved by including the “brain dimension” into the equation. That’s when I realized the potential of BCI in the broader context of extended reality (XR).
What’s interesting is that I come from a family of humanities folks – teachers and professors of Latin, history, and art. My father did chemistry, but that’s about as close to tech as my family got. So, I was really pioneering this field in my family. Even now, my parents try to understand what I’m working on, but it’s pretty challenging for them to grasp these concepts.
And that’s how I ended up here, fully immersed in the world of XR and BCI. I’m working on developing more intuitive interfaces that blend brain signals with other sensory inputs in extended reality environments. Lately, my focus is on video games, the perfect context for experimenting with novel interactions. I’m excited to see how these technologies will reshape our interaction with computers and the world around us. It’s a far cry from fixing family computers, but in a way, my goal remains the same: to make technology more accessible and user-friendly, just on a much more advanced level.
Have you ever worked on a project that involved brain-computer interfaces (BCI)? What were some of the unique challenges you encountered, and how did you address them?
Yes, I’ve worked on several BCI projects, primarily focusing on EEG. Working with technology from the medical domain presents numerous challenges, but I’ll focus on two major ones I frequently encounter: ergonomics and user experience in neuro games.
First, ergonomics. When companies develop new BCI devices, they naturally focus on functionality – it has to work, and work well. Ergonomics often comes later, especially for startups or small companies that can’t invest heavily in designing for comfort. I’ve had cases where the device became uncomfortable for users after just 15 minutes.
To address this, we had to design experiences to be shorter or include breaks. But in scientific data collection, you can’t simply remove the hardware without creating biases. So we implemented software breaks, allowing the system to pause without disrupting data collection. We also developed small cushions to place under the electrodes, maintaining their position while providing relief from discomfort. It’s about finding creative solutions to keep the experiment going without breaking the flow or compromising data integrity.
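To give an idea of what a software break can look like, here is a minimal sketch (illustrative Python, not tied to any specific EEG SDK): the hardware keeps streaming, and break spans are simply tagged so they can be excluded at analysis time.

```python
import time

class AnnotatedRecording:
    """Toy recorder (illustrative sketch): the hardware keeps streaming,
    but 'break' spans are tagged so they can be excluded during analysis."""

    def __init__(self):
        self.samples = []       # (timestamp, sample)
        self.break_spans = []   # (start_time, end_time)
        self._break_start = None

    def add_sample(self, sample):
        self.samples.append((time.time(), sample))

    def start_break(self):
        # Don't touch the headset; just remember when the break started.
        self._break_start = time.time()

    def end_break(self):
        if self._break_start is not None:
            self.break_spans.append((self._break_start, time.time()))
            self._break_start = None

    def clean_samples(self):
        """Return only the samples that fall outside every break span."""
        def in_break(t):
            return any(start <= t <= end for start, end in self.break_spans)
        return [(t, s) for t, s in self.samples if not in_break(t)]
```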
The second challenge is specific to neuro games – games you control with your brain. Calibrating difficulty is tricky because there’s a latency between thought and action. Unlike clicking a button, you don’t get immediate feedback. Users have to learn this delay, which varies between individuals.
The challenge is balancing the game mechanics to be neither too slow (boring) nor too fast (frustrating). The goal is to keep the player in the “flow channel,” for those familiar with Csikszentmihalyi’s theory. It requires a lot of fine-tuning and adapting to individual players. This is particularly challenging but also exciting – we’re designing interactions that have never existed before.
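To make the flow-channel idea concrete, here is a toy sketch of that kind of adaptation (illustrative Python, not production code; the single response-window parameter and the thresholds are assumptions for the example): when the recent success rate drops too low the game grants more time, and when it climbs too high the window tightens.

```python
class FlowTuner:
    """Illustrative sketch: keep a BCI game in the flow channel by nudging
    one parameter, the response window between detected intent and action."""

    def __init__(self, window_s=2.0, low=0.4, high=0.8, step=0.25,
                 min_s=0.8, max_s=5.0):
        self.window_s = window_s          # seconds the player has to act
        self.low, self.high = low, high   # target success-rate band
        self.step, self.min_s, self.max_s = step, min_s, max_s
        self._outcomes = []

    def record(self, success):
        self._outcomes.append(success)
        if len(self._outcomes) >= 10:     # adapt every 10 trials
            rate = sum(self._outcomes) / len(self._outcomes)
            if rate < self.low:           # frustrating: give more time
                self.window_s = min(self.window_s + self.step, self.max_s)
            elif rate > self.high:        # boring: demand quicker control
                self.window_s = max(self.window_s - self.step, self.min_s)
            self._outcomes.clear()
        return self.window_s
```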
Another interesting aspect of BCI technology is the variety of electrode types and placements. For instance, we have devices like the Unicorn that use hybrid electrodes, which can work with or without gel. There’s always a trade-off between signal quality and user comfort. Gold-plated dry electrodes provide excellent signals but are expensive and often not so comfortable, while active gel electrodes give the cleanest data but are less convenient to set up.
The positioning of electrodes is crucial too. While you can theoretically get brain signals from anywhere on the head, certain areas provide clearer information for specific functions. For example, in-ear and over-ear EEG devices are becoming popular, but they might struggle to pick up clear signals from the motor or visual cortex.
Looking to the future, there’s exciting work being done in combining BCI with other technologies. Meta’s wrist-based neural interface is a great example – it uses similar technology to EEG but focuses on muscle signals (EMG). The potential to combine brain and muscle signals opens up new possibilities for intuitive human-computer interaction.
We’re also seeing interesting developments in connecting BCI with language models and exploring how the brain processes language. With new standards like Bluetooth 6.0 allowing multiple device connections, we might soon see integrated solutions that combine various sensors from different body locations.
These are just some of the many challenges and opportunities we face in BCI. The field is constantly evolving, pushing us to solve problems that have never been encountered before. It’s this constant exploration and innovation that makes working with BCI so rewarding.
Do you see any potential for incorporating BCI into your current project? If so, how could it enhance the user experience or expand the project’s capabilities?
Yes, I absolutely see potential for incorporating BCI into my current projects. In fact, I use BCI in almost all of my projects, except for some specific machine learning analysis tasks.
There are two main reasons I’m drawn to using BCI. First, I’m passionate about making gaming more inclusive for people with different motor abilities, a demographic that has been somewhat overlooked in the industry. BCI has the potential to create fun and engaging gaming experiences that are accessible to everyone.
Secondly, BCI opens up exciting possibilities to study new types of interactions and user experiences. We can explore cooperation and coordination between players by mixing brain input with standard inputs and even other modalities like EMG input from the wrists. This field is ripe for exploration, similar to the early days of VR.
BCI can augment the user experience by creating an additional layer of interaction for everyone. It’s not about replacing standard input entirely, but rather about providing more options and flexibility. For example, in VR or XR experiences where you typically need two hands for control, we could replace some buttons with brain input. This would allow more people, including those missing fingers or with limited mobility, to fully engage with the experience.
However, incorporating BCI also presents unique design challenges. We need to consider universal design principles to ensure that these new input methods are usable and intuitive for everyone, regardless of their abilities. For example, we need to think carefully about how users will calibrate and control the BCI system, as well as how to provide clear and timely feedback.
Ultimately, by incorporating BCI, we’re creating more inclusive and innovative gaming experiences. It’s opening up a whole new realm of possibilities for research and development in user interaction, making digital experiences accessible to a broader range of people.
Tell me about a time you had a breakthrough moment during a hackathon. What was the problem, what was the solution, and what made that moment so impactful for you?
Every hackathon presents its own set of challenges and triumphs, but my most memorable breakthrough happened recently, at a hackathon where the team from my lab (CIMIL) and I went on to win a top prize.
Our project aimed to give musicians control over the soundscape of a jam session using their brainwaves. We developed a console where they could select different soundscapes via brain signals, without needing to switch from their instrument to a keyboard or use their feet to work pedals.
The problem arose when our tester musician did a guitar solo, shaking their head. This motion caused a cascade of unintended inputs, constantly switching the soundscapes. With only a couple of hours left until submission, we felt the pressure mounting.
Then came a pivotal moment of realization. We had continuous information about signal quality from the electrodes! This led us to design a thresholding mechanism that automatically switches off the BCI when one-third of the electrodes become unreliable, turning it back on when the signal stabilizes.
The impact was immediate and significant. The musician could now headbang freely without triggering unintended changes, yet still utilize the BCI when the signal was stable. The difference in usability was remarkable. This is a good example of working around the constraints and affordances of the technology, as mentioned earlier.
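For the curious, the gating logic was conceptually this simple (an illustrative Python sketch, not the actual hackathon code; it assumes the acquisition software exposes a per-electrode reliability flag). In practice you would also add some hysteresis so the BCI doesn’t flicker on and off right around the threshold.

```python
def bci_enabled(quality_flags, bad_fraction=1/3):
    """Illustrative sketch: pause BCI input once at least a third of the
    electrodes are unreliable, re-enable it when the signal stabilizes.

    quality_flags: one boolean per electrode, True if its signal is reliable.
    """
    bad = sum(1 for ok in quality_flags if not ok)
    return bad < bad_fraction * len(quality_flags)

# Example: 8-channel headset, three electrodes lose contact during a headbang
print(bci_enabled([True] * 5 + [False] * 3))  # False -> ignore BCI commands
print(bci_enabled([True] * 7 + [False]))      # True  -> BCI is back on
```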
This experience also taught us invaluable lessons about teamwork under pressure and the importance of having a backup plan. If you participate in hackathons, the first thing you learn is that coordinating efforts and delegating tasks are essential to completing the project in the given time, a valuable lesson that applies to any context involving teamwork.
Beyond the win, the project’s success gave us credibility in the BCI community, leading to (hopefully) a publication and a demo session at a conference next fall. It was a good achievement for my career, but most importantly it was fun and inspiring to work with such a creative team.
How could BCI technology be used to make VR/XR experiences more inclusive for individuals with disabilities? What specific challenges and opportunities do you see in this area?
I already see VR/XR experiences as more inclusive in some ways. For example, they allow people to reach places that would be physically challenging, like someone in a wheelchair being able to virtually climb Mount Everest. However, current XR technologies are most often designed with the assumption that users have two fully functional arms, which is a significant limitation.
BCI technology could address this limitation and make VR/XR experiences even more inclusive for individuals with disabilities in several ways:
- Alternative Input: BCI can provide an input method for people who have limited or no motor abilities. For instance, if someone can’t press a specific button, like the MENU button, BCI could allow them to navigate using conscious brain commands.
- Enhanced User Experience: For people with limited mobility, BCI can improve their overall experience by reducing the need for physical interactions.
- Adaptive Environments: Passive BCI could adapt the virtual environment for people who might struggle with standard interfaces. This could be particularly helpful for neurodivergent individuals or those with ADHD, making it easier for them to follow the flow of an application.
- Multimodal Interfaces: By combining BCI with other technologies like eye-tracking or EMG (electromyography), we could create more comprehensive and accessible interfaces. This combination could potentially map all the essential commands needed to navigate a virtual world, access menus, and interact with the environment (see the sketch after this list).
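As a purely hypothetical illustration of the multimodal point, a gaze-plus-BCI selection scheme could be as small as this sketch (Python, with invented names): eye tracking proposes the target and the BCI confirms the selection.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch with invented names, not any vendor's API.

@dataclass
class GazeSample:
    target_id: Optional[str]      # object currently fixated, if any

@dataclass
class BciSample:
    confirm_confidence: float     # decoder confidence for a "select" intent

def select_target(gaze, bci, threshold=0.7):
    """Eye tracking proposes the target, the BCI confirms the selection."""
    if gaze.target_id is not None and bci.confirm_confidence >= threshold:
        return gaze.target_id
    return None

# The user fixates a menu button and the decoder is fairly confident
print(select_target(GazeSample("menu_button"), BciSample(0.82)))  # menu_button
print(select_target(GazeSample(None), BciSample(0.95)))           # None
```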
The opportunities are significant. We could extend current XR applications to be usable by a much wider range of people, potentially revolutionizing how individuals with disabilities interact with digital environments.
However, there are challenges. The main one is probably the cooperation between different companies. Nobody is doing everything – each company specializes in one aspect. It’s usually up to academia to try different combinations of technologies. Then a company might invest in a combination that works, but everyone is very protective of what they’re making.
Another challenge is convincing creators and innovators that cooperation is important and beneficial for the market. There’s a tendency to want to be the first to create the “next big thing,” even if it means delaying progress by years.
Despite these challenges, I’m optimistic about the potential. The trend towards open-sourcing in tech is promising. Some big tech companies are pushing for open-sourcing part of their stack, not just out of kindness, but because they see the economic value in growing a community around their technology. This approach could help with standardization and accelerate progress in making XR more inclusive through BCI technology.
What are your thoughts on the ethical considerations surrounding the use of BCI in VR/XR? What are some potential concerns, and how could these concerns be addressed?
I have many ethical concerns about the use of BCI in VR/XR. The main issue is the sheer amount of data being collected. Social media platforms track your activity while you’re using them, but with BCI and VR/XR it’s different: when you’re collecting biosignals, there’s always something going on. If the sensor is on and collecting data, your body is constantly sending information, so there’s a much larger amount of data being gathered.
This was also one of the concerns when VR became mainstream. It’s not just what you do inside VR; you also have all these cameras tracking around your house, plus gyroscope and accelerometer data. In VR, if you have the full 3D geometry of a room, you know where the person is, when, and what they’re doing. And if you also have brain information on top of this, you can really get a lot out of it, especially with algorithms getting better and better. You’d know about engagement, emotions, workload, and stress at every moment, in our homes, in our daily lives. So there’s a lot that’s potentially dangerous about it.
My main concerns include:
- Privacy invasion: The amount and type of data collected could lead to unprecedented levels of insight into a person’s private life and thoughts.
- Data ownership: Who really owns this data? This is a crucial question that needs to be addressed.
- Misuse of data: While this data could create incredible innovations, it could also be used for very targeted and invasive advertising or political propaganda.
- Psychological impact: As the example of smartwatch heart-attack-risk alerts shows, too much information can negatively impact people’s behavior and quality of life.
To address these concerns, I believe we need to keep regulating how data is processed, something we care quite a bit about in Europe. Of course, regulation should not choke the use of data, but rather promote responsible use. This also includes the end user, who should be aware of what data is being collected and how it’s being used. Anonymization, when possible, and transparency of usage can strengthen users’ trust in service providers. Ultimately, I think it’s a matter of control over the data: you (the company) give me (the user) a choice over which data I share, for what, for how long, and so on. Another option is some form of compensation; a few companies already provide fair value in exchange for user data, and that’s a welcome alternative to what was being done just a few years ago, before GDPR.
Ultimately, it’s not about using or not using this technology, but how we handle the data. We need to find a balance between leveraging the benefits of this technology and protecting individual privacy and wellbeing. It’s a complex issue, but one we need to address as these technologies become more prevalent.
How do you think BCI technology will impact user privacy and data security? What measures should be in place to protect users’ neurological data?
I believe BCI technology will have a significant impact on user privacy and data security, similar to what we’ve seen with social networks. There’s an increasing interest in emotion recognition, which could lead to more targeted advertising and propaganda, especially in XR environments. This is concerning because neurological data is extremely sensitive and personal.
Given our experiences over the past 15 years with social media and big data collection, we should have learned about the risks and what’s acceptable. I think we need to go beyond just having users sign an informed consent form once and then never reminding them again. Users should know in real-time when their data is being collected and analyzed.
For neurological data, I believe we need more granular and continuous control. Let me give you an example: imagine having an implanted neural device that you can’t easily remove. It’s not like a wearable that you can just take off. In this scenario, it’s crucial to always make users aware when data is being collected. As I see it, they should be asked to explicitly consent to each collection session, and be automatically opted out if they ignore the request.
Let’s imagine this interaction with an example: A real-time notification arrives, informing the user that there might be a data collection in the coming hours. In this imaginary granular control system, the user decides. For instance, they might want to share data when they’re awake but not when they’re sleeping.
Some measures I think should be put in place to protect users’ neurological data include (a rough sketch of such a consent model follows the list):
- Real-time notifications of data collection
- Granular control over what data is shared and when
- Option to opt-out of data collection at any time
- Transparent explanations of how data will be used
- Strict data anonymization protocols
- Regular audits of data usage and security measures
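To illustrate what granular, continuous control could mean in practice, here is a rough sketch of a consent record (Python, with purely hypothetical field names; it is not any existing standard): data flows only for the purposes and daily time windows the user has approved, and a global opt-out always wins.

```python
from dataclasses import dataclass, field
from datetime import datetime, time
from typing import List

# Purely hypothetical consent model, not an existing standard or product.

@dataclass
class ConsentWindow:
    purpose: str        # e.g. "focus tracking"
    start: time         # daily window in which sharing is allowed
    end: time
    expires: datetime   # consent must be renewed after this date

@dataclass
class NeuroDataConsent:
    user_id: str
    windows: List[ConsentWindow] = field(default_factory=list)
    opted_out: bool = False      # global kill switch, honoured immediately

    def allows(self, purpose, at):
        if self.opted_out:
            return False
        return any(
            w.purpose == purpose
            and w.start <= at.time() <= w.end
            and at <= w.expires
            for w in self.windows
        )

# Example: share "focus tracking" data only between 9:00 and 18:00, until end of 2025
consent = NeuroDataConsent(
    user_id="u42",
    windows=[ConsentWindow("focus tracking", time(9), time(18),
                           datetime(2025, 12, 31))],
)
print(consent.allows("focus tracking", datetime(2025, 6, 1, 10, 30)))  # True
print(consent.allows("sleep research", datetime(2025, 6, 1, 23, 0)))   # False
```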
Some may argue that users will never willingly give up their data. As I explained earlier, the technology is mature enough for a mutual exchange between companies and people. It shouldn’t be invasive, and users shouldn’t be expected to give everything. Instead, there should be a fair system where users have control over their data and can choose to share it in exchange for rewards or benefits.
There will always be someone who is not interested in sharing data, and it’s their right to do so. Fortunately, companies like Meta are starting to comply with these regulations in Europe, and users are given the option to completely withdraw from data collection if they choose.
Ultimately, protecting neurological data is crucial. As BCI technology advances, we need to ensure that we’re not just repeating the mistakes we’ve seen with social media and other data-heavy technologies. We need to prioritize user privacy and give individuals real control over their most personal data. Otherwise, users will never trust and adopt the technology.
What advice do you have for people (entrepreneurs, professionals, artists, and students) looking to enter the XR/BCI industry? And how can they best position themselves for success?
For people looking to enter the XR/BCI industry, here’s my advice:
- Be passionate, realistic, and ready to learn: The field is still emerging, so don’t expect quick riches. Instead, focus on the incredible learning opportunities and transferable skills you’ll gain.
- Develop multidisciplinary skills: Learn about neuroscience, signal processing, machine learning, software development, and design. For XR, focus on 3D modeling and real-time rendering, and familiarize yourself with industry-standard tools and platforms like Meta, Vive, Pico, or Apple Vision. There are also many emerging companies, like Lynx, that are finally competing with the big players. For BCI, definitely look at OpenBCI, g.tec, AntNeuro, Neurable, and many other companies and startups. Unsurprisingly, more than a few are working out how to integrate brain control into VR.
- Focus on niche problems: For entrepreneurs, find specific issues you can solve with XR or BCI. Don’t try to boil the ocean. Professionals should explore how these technologies could enhance their current work, like architects using VR for immersive presentations or therapists using biofeedback.
- Seek hands-on experience: Students, look for internships or research opportunities at universities and companies at the forefront of these fields. There’s no substitute for practical experience.
- Engage with the community: The BCI community is welcoming and supportive. Attend conferences, join online forums, and participate in hackathons. You’ll find job opportunities and collaborations, and you’ll stay updated on advancements. I recommend starting with NeurotechX.com.
- Consider various applications: Look beyond gaming to fields like healthcare, education, and workplace productivity. Artists have a unique opportunity to explore new frontiers in interactive and immersive art.
- Stay updated on hardware advancements: Keep an eye on companies working on portable sensor technologies. Hardware breakthroughs will be crucial for mass adoption.
- Be open to collaboration: This field thrives on combining different technologies and specialties. Be ready to work with people from diverse backgrounds.
- Consider ethical implications: Always keep in mind data privacy and user rights as you work in this field. This awareness will be crucial as the technology develops.
- Embrace the long game: Think of entering XR/BCI as an investment in yourself and a future where these technologies are ubiquitous. We’re probably looking at about 10 years before we see mass-produced applications, similar to VR’s journey since 2015.
Remember, the future belongs to those who dare to imagine and build new realities. So dive in, be bold, and let your passion guide you! The next 5-10 years will bring incredible advancements in XR/BCI, especially with the convergence of AI. Position yourself now for the exciting future ahead.
In your vision for the future of XR, how do you see BCI playing a role? How could it transform the way we interact with virtual worlds?
In my vision for the future of XR, BCI will be essential for creating truly immersive and intuitive experiences. Imagine a future where we don’t just experience virtual worlds but interact with them seamlessly using our thoughts and emotions.
Similar to the immersive concept portrayed in “Ready Player One” (minus the dystopian elements!), I believe BCI will allow us to seamlessly navigate and manipulate digital environments with our minds. Imagine effortlessly switching between virtual screens with a thought, controlling in-game avatars with unparalleled precision, or even feeling emotions more vividly within a VR experience.
But the potential of BCI extends far beyond entertainment. Imagine cars equipped with BCI systems that can sense driver fatigue or stress, automatically adjusting settings or even pulling over safely if needed. This kind of integration of BCI with other technologies has the potential to make our roads safer and our daily lives more convenient.
I’m confident that XR, enhanced by BCI, will become as ubiquitous as smartphones are today. We’ll likely have sleek, integrated devices that transport us to these immersive experiences effortlessly.
Based on VR’s trajectory—gaining significant traction within the past decade—I believe BCI is on a similar path. I anticipate that within the next 10 years, we’ll start seeing mass-market BCI applications integrated into our everyday lives, creating a world where the boundaries between the physical and digital are increasingly blurred.
What are some of your favorite examples of BCI applications in VR/XR, and what makes them compelling to you?
One of the most compelling BCI applications in VR/XR, for me, was NextMind. Their device, released back in 2021, was this sleek little circle that clipped onto a VR headset, essentially turning it into a brain-computer interface! Using EEG technology to detect visual attention, it offered a non-invasive and surprisingly user-friendly experience.
What blew me away was their developer kit, which allowed for building brain-interactable applications directly within Unity. Seeing demos where you could select objects, trigger events, even navigate menus—all just by focusing your attention—was incredible. I even tried a puzzle game where you moved blocks with your mind, and it felt surprisingly responsive and intuitive.
Before NextMind, my BCI experience was limited to research-grade EEG caps requiring gel, which are far from consumer-friendly. This felt different. It was clear that NextMind had put considerable thought into designing something comfortable and accessible for a wider audience, not just experts in the field.
While NextMind was undeniably impressive, it wasn’t without limitations. The accuracy wasn’t perfect, and it required some training. Additionally, its focus on visual attention limited its application compared to more comprehensive BCI systems.
Despite this, seeing NextMind in action was a pivotal moment for me. It sparked my imagination about the real-world potential of BCI and inspired me to pursue a PhD in the field. Even though NextMind was acquired by Snap and is no longer commercially available, it provided a glimpse into a future where our thoughts directly shape our digital experiences.
This experience, along with the work being done by companies like Neurable (exploring emotion detection in VR) and OpenBCI (developing open-source BCI platforms), makes me incredibly excited for what the future holds. I’m eager to see more sophisticated signal processing that can interpret a wider range of brain activity and the integration of BCI with other biometrics for a more holistic approach to human-computer interaction in XR.
Who have been your most important mentors? Why? How did you meet them?
I’ve been incredibly lucky to have two mentors who’ve deeply shaped my journey, both professionally and in my passion for BCI.
First, there’s Alberto, a senior developer at my first company. I was straight out of university, full of theoretical knowledge but very unsure in a real-world setting. He took me under his wing and taught me the ins and outs of coding for enterprise applications, emphasizing best practices and how to structure code effectively. But what I value most is how he taught me to approach a software project professionally. It wasn’t just about coding; it was about collaborating with colleagues, communicating with clients, and managing expectations. As he taught me, doing things his way “…makes everything so much smoother and easier for everybody…to work with me as well…to jump into my project and be guided…” His mentorship made me a far better collaborator and developer.
Then there’s Mannes, my Master’s supervisor, who really ignited my passion for BCI. He was this incredibly experienced machine learning expert who, later in his career, had become completely captivated by BCI. He had a very “sink or swim” approach to mentorship—expecting me to wrestle with problems and find my own way—which wasn’t always easy. But his constructive criticism was invaluable, and his passion was contagious! He really pushed me to think critically, and I credit his mentorship with preparing me for the challenges of a PhD.
I think a good mentor doesn’t just teach you skills, they build your confidence in what you can achieve.
What’s your favorite inspirational quote? What about the quote inspires you?
My favorite inspirational quote comes from the brilliant Italian neurobiologist and Nobel Prize winner, Rita Levi-Montalcini. She once said, “The body does what it wants, I am the mind.”
This quote deeply resonates with me, especially working in research. It’s a powerful reminder that our minds hold immense potential, regardless of our physical limitations or age. Levi-Montalcini lived by these words, remaining a vibrant intellectual force well into her later years.
This quote inspires me to constantly push the boundaries of what we think is possible with the human mind. It fuels my passion for developing BCI technology that can unlock new levels of human potential and create a more inclusive world for everyone. It’s a constant reminder that the most exciting frontiers are often the ones we haven’t even imagined yet.
Anything else you’d like to add?
If I could offer one piece of advice, especially to those just starting their careers, it would be this: don’t chase the money. Do what you love, what truly excites you, even if it doesn’t seem like the most lucrative path right now.
I know there’s a big push toward “monetizing your passion,” but I think that approach can backfire. When you turn something you love into a grind, it can suck the joy right out of it. You might even burn out before you reach your full potential.
Instead, focus on becoming really good at what you’re passionate about. Explore different fields, experiment, and allow yourself to be genuinely curious. If you’re truly dedicated to your craft and you become exceptionally good at it, opportunities—including financial ones—will inevitably present themselves.
Don’t get me wrong, I’m not saying money doesn’t matter. But I firmly believe that creativity, innovation, and true fulfillment come from a place of genuine passion. Choose a path that you’ll be excited to explore, even if it means taking some risks or veering off the well-trodden path. You might be surprised where you end up!
Find Michele on LinkedIn and learn more about his work on GitHub.
Know someone who should be interviewed for an XR Creator Spotlight? Please email us at hello@xrcreators.org.