Content provided by XR for Business and Alan Smithson from MetaVRse. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by XR for Business and Alan Smithson from MetaVRse or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://fi.player.fm/legal.

XR Podcast Hosts Unite, with Voices of VR Podcast’s Kent Bye – Part 1

35:53
 

One of Alan’s biggest inspirations to start XR for Business was the prolific catalogue of Kent Bye, who has released 884 recordings for his VR-centric podcast, Voices of VR. Alan has Kent on the show for a chat that was too big for one episode! Check out Part 2 later this week.

Alan: Hey, everyone, Alan Smithson here with the XR for Business Podcast. Coming up next, we have part one of a two-part series, with the one and only Kent Bye from Voices Of VR. Kent Bye is a truly revolutionary person, and he has recorded over 1,100 episodes of the Voices Of VR podcast. And we are really lucky to have him on the show. And this is two parts, because it goes on and on. Welcome to Part 1 of the XR for Business Podcast, with Kent Bye from the Voices Of VR podcast.

Kent has been able to speak peer to peer with VR developers, cultivating an audience of leading VR creators who consider the Voices Of VR podcast a must-listen, and I have to agree. He’s currently working on a book answering the question he closes with every interview he does: “What is the ultimate potential of VR?” To learn more about Voices Of VR and sign up for the podcast, it’s voicesofVR.com. And with that, I want to welcome a person who has been instrumental to my knowledge and understanding of this industry. Mr. Kent Bye, it’s really a pleasure to have you on the show.

Kent: Hey, Alan. It’s great to be here. Thanks for having me.

Alan: Oh, thank you so much. I listened to probably the first two or three hundred episodes of your podcast, and I went from knowing literally nothing about this industry to knowing a lot. And it’s those insights that you’re able to pull out of the industry that are just amazing. So thank you for being the voice of this industry.

Kent: Yeah. And when I started the podcast, I wanted to learn about what was happening in the industry. And so I felt like one of the best ways to do that was to go to these different conferences, and to talk to the people who were on the front lines of creating these different experiences. And so at this point, I think I’ve recorded over 1,100 different interviews and have published over 760 of them so far. So for about every two interviews I publish, I have like another interview that I haven’t. So I just feel like it’s important to be on the front lines, going to these gatherings where the community’s coming together, and to just be talking to people and see what they’re saying. See what the power of this new medium is.

Alan: I had the honor of being interviewed by you at one of these conferences. I don’t know if it ever got published, but it was an honor anyway just to speak with you on the subject. But you get to talk to literally everybody, anybody who’s anybody in this industry. And it’s really an amazing experience to listen to these podcasts. And you really go deep into the technology of it. The listeners of this podcast are maybe more on the business side; maybe they’re not really into VR. What are some of the business use cases that you’ve seen from these people that you’ve been interviewing that made you go, “Wow, this is incredible?”

Kent: Well, first of all, virtual and augmented reality as a medium is a new paradigm of computing: spatial computing. And I think one metaphor to think about is how we usually enter into the computer by pushing buttons and moving a mouse around. And it’s almost like we have to translate our thoughts into a very linear interface in order to interact with computing. And it’s usually also in a 2D space, so a lot of times you’re interacting in 2D while designing for 3D spaces. And so there’s kind of like this weird translation, where you have to do all these abstractions in order to do computing. So I feel like one of the big trends that’s happening right now is that with spatial computing, it’s becoming a lot more natural and a lot more intuitive.

And so for anybody that’s doing design and 3D objects, it’s almost like a no-brainer, whether it’s in architecture, or designing 3D objects, or big aerospace, airplanes, cars. For all these different people who are making these 3D objects in these CAD programs, there’s just something about it: you can make design decisions a lot faster when you’re actually immersed in the space. And you don’t have to spend all this money to prototype these things out. So you see a lot of it in architecture, engineering, construction. But what I’m really excited about is these other aspects of natural communication. So how is AI going to be combined with these spatial computing platforms, being able to detect what we’re looking at with a Hololens 2, and then being able to speak these different affordances and actions? We’re going to get to the point where you can just say something and just speak, much more like you would interface with other humans. And I think the computer technology is gonna become better and better at being able to detect what we are intending, what we’re saying.

I’d say the other huge area where we’re seeing just an enormous amount of applications is in training. And really, when you’re training, you ideally want to do it yourself and be immersed in the context of the environment, to have all the emotions that come up when you’re under pressure to make a decision, but also to be embedded in a context that is mimicking what the real-world situation is. And then you have to make choices and take action. And the action that you’re taking within VR is often very similar to those same embodied interactions that you may be doing in real life. So I feel like there’s so much of a mirroring of what’s happening in these virtual worlds that the training applications are just incredible, whether it’s a surgical simulation or Walmart using it to train different employees. Elite sport athletes can do lots of different repetitions and train themselves to have a level of situational awareness.

I’d say those are the big ones that I’m seeing right now. In the future I expect to see a lot more information visualization, data visualization, finding completely new ways to analyze data, symbolically and spatially. I think there’s a lot of work that can still be done there. But a lot of what I think about also is just, like, flow states: what does it mean to work, and how can you cultivate the deepest flow state that you possibly can, so that when you’re working you’re not having the technology get in your way, but having technology amplify what you’re able to do? So another big area that I’m seeing early indications of, especially when I went to Laval Virtual in France — it’s an expo that’s been going for the last 21 years — is this concept of open innovation. So collaboration and communication. Remote assist is another sort of separate thing. But in terms of innovation, what are the keys to innovation? And I think a big part of it is being able to openly share and ideate and brainstorm and tap into the more creative aspects of what you’re doing.

And so I’m seeing a lot of– like Dassault Systèmes was working on some specific products for open innovation, which I’m excited about, because a lot of what you’re seeing with augmented reality is for people who are frontline workers. So people who are on factory floors, or people who are needing assistance for remote collaboration, or the people who are on the ground physically doing these different actions, whether it’s on a construction site or a factory floor. So a lot of the use cases for the Hololens have been very much in that realm. But I’m also really interested in knowledge work, like what does it mean to be able to collaborate with other people and to lower all the barriers?

Alan: We had Jacob Lowenstein from Spatial on the show.

Kent: Oh, cool. Yeah. Yeah, I just talked to Anand [Agarawala] — who’s the CEO of Spatial — and saw the demo and just did a whole breakdown of all what they’re doing with Spatial.

Alan: Well, that speaks to exactly what you were saying: design work and collaboration, and higher-level work collaboration, in augmented reality.

Kent: Yeah, I think that it’s still very early, but just– it’s also very early in terms of having this completely new paradigm for how you do spatial computing. I think there’s going to be a mix of sort of flashy Hollywood things that you see, like the famous Minority Report, where you’re kind of going through these different interfaces. That looks great, but it doesn’t always feel great if you have to do that for eight hours a day.

Alan: Yeah.

Kent: I think the key breakthrough is gonna be when you’re able to just not think about it, and kind of naturally move your body and be able to interface with computing with your full body. Because there’s this neuroscience concept, it’s called embodied cognition. And what that means is that we don’t just think with our minds, we think with our entire bodies. And so what does it mean to actually get your body engaged and moving around? It actually makes you think better. And anybody who likes to take meetings while they’re walking may find that they have a different way of brainstorming and ideating when they’re actually in motion with another person. I think that spatial computing is actually going to be leveraging a lot of those different types of concepts, in that we spend a lot of time very stagnant, sitting at our desks. But a lot of the affordances of VR, when you’re actually moving your body around, are tapping into deeper levels of the way that you think. So I think that there’s gonna be huge potential in what it means to be able to tap into that.

Alan: Absolutely. It’s really an exciting time. I– personally, I do walking meetings all the time. And I can tell you, it just– it’s not the same to have a phone meeting or a seated meeting; when you’re walking, that just sparks something. And I know Steve Jobs was a big advocate of walking meetings. So there must be something to it.

Kent: Yeah. And I think that I’m starting to see that spatial computing is going to be tapping into that. I’d also throw out there that there are these supposed AR frames.

Alan: Yeah.

Kent: And I expect people are going to be wearing, like, these sunglasses that are kind of shooting spatial audio into your ears, but able to tie in with your phone, getting GPS, and being able to — basically — detect which direction you’re looking in. There’s gonna be a lot of innovation that happens in just overlaying layers of audio on top of reality. We’ll eventually have digital objects on top of reality, but I think there’s a lot of innovation that’s happening, at least around storytelling, where when I go to these different film festivals, I kind of see what the storytelling potential is with these mediums. And I feel like there’s going to be this great convergence at some point in terms of figuring out how to engage people within a story to help teach them these different concepts. And I think that’s kind of like the next frontier: what is the blending of the storytelling affordances of VR on top of, like, gamification and game elements? You kind of have Hollywood mashed up with the game developer community, and VR is like this melting pot of all these different disciplines.

And so that’s what makes it so fascinating to me, is that you get people from every different domain, and they all have something to say about VR and AR, because it’s all about modulating the human experience. So I think we’re in this kind of very early Wild West era, where there aren’t a lot of very specific best practices or experiential design theories that have been well established, and so you kind of have to figure it out on your own. But I feel like there are enough proofs of concept to show that it’s effective. But to really tap into the deep, ultimate potential, I think we’re still quite a ways off from doing that. But one of the sort of dark horses — I’d say — for the enterprise is that there’s going to be an element of story and storytelling there, to really fully engage people. And I think we’re still very early in that era. Like with film, there was the cinema of attractions, where they were still trying to really figure out the language of the medium. I feel like we’re in a very similar spot, where we haven’t really figured out all the different affordances of the language that you use for spatial computing. It’s kind of an exciting time, just because there are a lot of experiments to be done and a lot of stuff that still needs to be figured out.
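
The audio-AR idea Kent describes above (sounds anchored to positions in the real world, steered by head orientation) maps fairly naturally onto the Web Audio API, so here is a minimal, illustrative TypeScript sketch of that pattern. It isn't tied to any specific glasses product; the cue position, the audio URL, and the yaw-to-forward-vector convention are assumptions made for the sake of the example.

```typescript
// Illustrative sketch only: pin an audio cue to a world position and steer
// it with head orientation using the Web Audio API's HRTF panner.
// The position, audio URL, and heading convention are assumed for this example.

const ctx = new AudioContext();

// A cue "pinned" 3 metres ahead and 2 metres to the right of the listener's origin.
const panner = new PannerNode(ctx, {
  panningModel: "HRTF",
  distanceModel: "inverse",
  positionX: 2,
  positionY: 0,
  positionZ: -3,
});

async function playAnchoredCue(url: string): Promise<void> {
  const encoded = await (await fetch(url)).arrayBuffer();
  const buffer = await ctx.decodeAudioData(encoded);
  const source = new AudioBufferSourceNode(ctx, { buffer, loop: true });
  source.connect(panner).connect(ctx.destination);
  source.start();
}

// Call whenever head orientation changes (e.g. from the glasses' IMU).
// yaw is in radians; 0 means facing -Z, increasing counter-clockwise.
// (Some browsers only expose listener.setOrientation(...) instead of these AudioParams.)
function updateListenerHeading(yaw: number): void {
  const listener = ctx.listener;
  listener.forwardX.value = -Math.sin(yaw);
  listener.forwardY.value = 0;
  listener.forwardZ.value = -Math.cos(yaw);
  listener.upX.value = 0;
  listener.upY.value = 1;
  listener.upZ.value = 0;
}
```

The same pattern scales from a single cue to a whole layer of geolocated sounds by updating each panner's position as the phone's location fix changes.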

Alan: It’s true. We see it every day, where things– I actually came up with this quote, “How do you disrupt an industry constantly disrupting itself?” And every day something comes on the news in virtual and augmented reality that flips the industry on its head. I mean, the introduction of ARKit and ARCore probably put 200 startups out of business. And we’re seeing these kind of rapid advances in technology. We’ve got AR platforms being hosted by Amazon, by Facebook, by Snapchat, where you can develop your own AR lenses. Anybody can do this, not just developers. So I think there’s going to be a democratization on the creation side, as well as this expansion on the enterprise side, which will — in my opinion — drive the consumer market forward.

Kent: Yeah, I feel like VR and AR are such an interesting realm to cover, just because they’re helping define what the human experience is and all the different contexts that we have. Because there are going to be entertainment applications, medical applications, and ways to communicate with our partners — whether it’s our romantic partners or business partners — being able to deal with death and grieving, spirituality applications in terms of connecting to myth and story and philosophy, but also our career and what we’re using in our jobs, connecting to friends and family, dealing with isolation or neurodegenerative diseases. Expression of identity is a huge thing with the facial filters, and I’ve seen that a lot more in the consumer space. But there are the different ways that we have virtual embodiment, and what it means to take on different characters and different bodies, and financial things like virtual economies, as well as communication and education, and connecting to home and family. So I feel like there are all these different specific contexts, and each of them is going to teach us something new about what it means to be human.

And I’d say that the difference between VR and AR, through this lens of context, is that with VR, you’re able to completely shift your context. So you may be at home, but all of a sudden you’re completely embedded within an office meeting and now you’re at work. So you’re kind of able to do this huge context switch. But with AR, it’s less about doing a complete context switch, because if you’re at home, you’re already at home, and you may be able to overlay different people inside of your existing context, but you’re still in that center of gravity of whatever context you happen to be in. So I’d say that with AR, you’re gonna still be embedded and grounded in whatever context you’re in, but you’re able to kind of pull things in. I think it’s gonna be harder to do a complete context shift with AR, but as AR and VR start to eventually converge, maybe you will see that a little bit more. But those are at least some of the ways that I’m seeing a little bit of the differences.

For example, if you want to do an architectural visualization, it may be better to do that in VR, because you’re able to completely shift your context and be completely immersed in that environment. But if you’re trying to have a group conversation with five different people about a 3D object, maybe you want to have that in AR — especially if you’re co-located with each other in the same room — so you can have all the affordances of body language and communication that we all have. But if you still want to talk about these virtual objects, maybe you have a Hololens on your head, and maybe there are some tablets with different ways of accessing and annotating these different 3D objects. So those are some of the different use cases that I am seeing, at least at this point.

Alan: Yeah, it’s interesting. Jacob from Spatial was mentioning– because I asked him, “Why wouldn’t somebody just put VR on and go into a collaboration room?” And their response, which I actually hadn’t considered, was: when you’re in a group — let’s say you’re four people in an office, and you’re face to face — you still want to see those people. You don’t want to be in four different VR headsets even though you’re in the same room. It would be weird. Whereas you can also then have the four people that are in the room, and have a fifth person who is somewhere else in the world kind of beaming in, and those four people then beaming out to them. It really creates this feeling of community. And a lot of times you also want to be able to see other devices while you’re in there. And we’re getting to the point where we’re going to be able to port our devices into VR, our computer screens and our phone screens or whatever, but we’re not quite there yet.

Kent: It reminds me of– I have gone to Microsoft Build for the last three years, and that’s a good place to kind of see some of the AR demos that are there from Microsoft’s partners. And so some of the demos that I’ve seen there were for people who were doing sales for, say, medical equipment. Sometimes the medical equipment is very specific to the context of a specific room. And so I think people who are doing sales would be able to look at the existing context of a room and start to overlay these digital objects on that room, but still have that face-to-face interaction. And especially with the Hololens 2, where you can kind of flip up the visor if you want to look people directly in the eye. But I feel like, just in talking to different people, the sales increase when you’re able to have them see what it’s actually going to look like in that context. It’s also just– Lowe’s and these different companies– when you go to Home Depot or Lowe’s, they’ll do these whole build-outs of an entire kitchen.

But oftentimes you may have a very specific thing you’re looking for, and you want to know what that looks like in the context of your kitchen. So being able to detect your space and your context, and then put that object — whether it’s a refrigerator or whatever it is — into your– into that context, it lowers the cognitive load of imagination, because it actually is very difficult for you to imagine what it’s going to look like. And you kind of have to just see it before you really know whether it’s going to work or not. And if you can do that, preserve that existing context and then lay the object in there, I think that’s another huge use case that I’m seeing. Whether it’s selling medical equipment or selling kitchen equipment for home renovation, those are some of the unique affordances of AR as a medium.

Alan: And the great thing about that is that it’ll work on any device; it’ll work on your phone. And by the end of this year, there’ll be over 2 billion smartphones that are AR-enabled with ARKit and ARCore. And so you’ll be able to put a fridge in your kitchen, in context, at the right size, and see if it fits. Then you can drop a car in your driveway and take a picture of your new car. So I think there’s gonna be a huge push towards kind of three-dimensional retail and e-commerce with these mobile devices, and you don’t even need a headset for that. You can use the device that’s in everybody’s hands. And it’s not the same experience, but it doesn’t have to be in those cases.
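
For a sense of what the place-a-true-scale-object pattern Alan and Kent are describing looks like in code, here is a hedged TypeScript sketch using the browser-side WebXR hit-test module rather than the native ARKit/ARCore SDKs. The `placeModelAt` renderer hook is hypothetical; a real app would wire this into three.js, Babylon.js, or a similar renderer.

```typescript
// Sketch of the "place an object on a real surface at true scale" pattern,
// using WebXR's hit-test module. placeModelAt() is a hypothetical hook into
// whatever renderer you use; error handling and rendering are omitted.

declare function placeModelAt(pose: XRPose): void; // assumed renderer hook

async function startPlacementSession(): Promise<void> {
  if (!navigator.xr || !(await navigator.xr.isSessionSupported("immersive-ar"))) {
    console.warn("Immersive AR is not available on this browser/device.");
    return;
  }

  const session = await navigator.xr.requestSession("immersive-ar", {
    requiredFeatures: ["hit-test"],
  });
  const viewerSpace = await session.requestReferenceSpace("viewer");
  const localSpace = await session.requestReferenceSpace("local");
  const hitTestSource = await session.requestHitTestSource!({ space: viewerSpace });

  // A tap ("select") anchors the model at the latest surface hit in front of the user.
  let placeRequested = false;
  session.addEventListener("select", () => {
    placeRequested = true;
  });

  const onFrame = (_time: DOMHighResTimeStamp, frame: XRFrame): void => {
    if (placeRequested) {
      const hit = frame.getHitTestResults(hitTestSource)[0];
      const pose = hit?.getPose(localSpace);
      if (pose) {
        placeModelAt(pose); // e.g. drop the fridge model here, at real-world scale
        placeRequested = false;
      }
    }
    session.requestAnimationFrame(onFrame);
  };
  session.requestAnimationFrame(onFrame);
}
```

Because WebXR poses are expressed in real-world metres, a model placed this way shows up at its actual size, which is exactly the "does it fit" question Alan raises.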

Kent: There’s an interesting point that came up in my mind as you were saying some of that. And that’s that I think a lot of enterprises need to see a lot of numbers in terms of the improvement, of how much more efficient things are. And people like Accenture have certainly been coming up with a lot of those different quantitative studies, and I think a lot of companies would want to see that. What is the return on investment for jumping into immersive? And I think those are important for being able to make those decisions. I think it’s also important to point out that there are a lot of benefits of spatial computing that may never be able to be quantified with a specific number. There’s a certain quality of experience that happens, and I feel like there’s a whole realm of usefulness of these spatial computing technologies where it’s going to take more behavioral and cultural shifts in order to use these technologies. And specifically what I mean is that we kind of live in an information environment right now where we really want fast, bite-size information, on the level of tweets. And I kind of see spatial computing as the antithesis of that, because it’s very difficult to hop into a virtual reality experience for a few minutes, although I will say–

Alan: It’s impossible to hop in *in* a few minutes. Every time I go to use mine, I’ve got to wait for all the updates. [laughs]

Kent: Yeah, there’s all sorts of thresholds. I mean, I will say that with Oculus Quest that’s changing for me. I’ve had early access to the Quest. And I do think that the Quest is gonna be revolutionary in terms of making it easier for people to hop in.

Alan: Oh, I can’t wait.

Kent: The focus of Oculus has been much more on gaming rather than productivity applications, but they still have a number of different productivity applications that are coming out, whether that’s going to be Tilt Brush for doing rough prototyping, or Gravity Sketch. And we’ll have to wait to see what other enterprise applications come out. But I do expect to see that the Quest is going to have a lot of applications. That’s the headset that has no tether, no wires. It’s completely wireless and mobile. And you’ve got these 6DOF controllers.

Alan: It’s really exciting. What’s the price point? I think it’s–

Kent: So it’s $399 for 64 gigabytes, $499 for 128 gigabytes. However, for the enterprise it’s like $999 per headset, with $180 per year. There’s a whole Oculus for Business program that is going to have specific offerings for the enterprise, where you get the ability to kind of turn off all of the main Oculus Home and be able to distribute just your application. And they’re working on different deployment solutions and whatnot. Because if you’re working with dozens or hundreds or thousands of headsets, then you’ve got to have some system to be able to deploy updates and software to all those headsets. And so that’s kind of the software they’re working on. But just to kind of wrap up a point that I was beginning to make, which is that I see virtual reality technologies as being very similar to sitting down and reading a book, where you’re actually making a commitment to be completely immersed and focused on a very specific task. And I feel like that is becoming rarer and rarer.

And I think that’s been, in some part, the difficulty of why VR may have not been taking off as quickly as some would have imagined. The technology is amazing, but there’s a certain amount of cultural shift that you have to have in order to really commit to being immersed and present within a virtual experience. And I feel like once you cultivate that quality of being, you can be fully immersed. And I feel like that is tapping into other levels of focus that are becoming more and more rare within our lives. And so with the levels of, like, focus and productivity and consciousness hacking, I expect that there are gonna be ways for people to really get into these deep flow states and potentially even start to do more work from home, especially if you work in an open office environment where it becomes more and more rare for you to really have this deep focus that you need. So I just wanted to point out that there’s a lot of emphasis right now in our culture on numbers and trying to quantify things.

And I’ve been focusing a lot also on what the different qualities are that may be difficult to put a number on. And I think it’s things like these levels of presence, these depths of connection, the intimacy that you can have when you’re face to face with somebody else. There are all these levels of body language; it’s why you fly across the world, because you want to have that intimacy. I think eventually we’ll get there with VR, but we still have a long way to go in terms of body language and emotional expression, where it’s not quite the same as being face to face. And maybe it’ll always be preferable to be face to face in certain contexts. But for some situations, I think it’s gonna be a lot better to just meet in a virtual world and not have to travel as much. Especially if you’re talking about remote collaboration with many different people, because if you use something like Zoom or Skype, it’s OK for a couple of people. But once you start to have like a group conversation with five or twelve people…

Alan: Yeah, it falls apart.

Kent: You really want to have body language, you want to have spatial audio. It’s so much more efficient to have big team meetings within virtual spaces, rather than trying to mediate it through digital technologies. And so one of the things I’m really interested in seeing is some of these different startup companies within the VR space that are remote, and they have to kind of dogfood their own remote collaboration tools. I think what that’s going to bring is that maybe there’ll be less emphasis on specific jobs or tasks where you’re expected to go into work. And I think eventually we’ll get to the point where maybe you could live out in the middle of the country, and as long as you have a good Internet connection, you could still be interfacing with some of the most talented and brilliant people in your disciplines or domains, and you could be anywhere in the world. And I think the potential of what that means is really exciting, because it doesn’t mean that you have to go and live in Silicon Valley or Los Angeles to be able to collaborate and work with some of these people, or New York City, or whatever major city it is in the world.

I see this other trend of remote collaboration or remote work, where people are able to work from home. But the thing that’s lost is those group conversations and the more serendipitous water-cooler conversations and stuff like that. So it’ll be interesting to see how some of these remote companies are able to adapt and create these tools. And one thing that I would say from my experience of working at a remote company is that if you’re completely 100 percent remote, then it works great. But once you have, like, a critical mass of people that are face to face, then it’s really difficult to be pulling in all these other people into these remote environments, just because it’s a definite context switch. So those are some of the things that I’m, in the long term, looking forward to seeing play out.

Alan: Yeah. It’s– I think another thing that will make a big difference, and it doesn’t seem like a big thing, is eye tracking. Being able to actually look somebody in the eyes in VR. I’ve had the opportunity of playing with the Tobii tracking system with the HTC Vive. And just being able to look at somebody, look them in the eyes and know that they’re actually looking at you. They’re not an avatar that’s kind of a disembodied, cartoonish version of themselves. And to be honest, everybody keeps trying to push towards photorealistic avatars, and there’s the uncanny valley of getting too close to reality, where your brain kind of goes, “there’s something not right” and rejects it. But I think we can stay on the cartoonish side of things as long as we have things like eye tracking and hand tracking. It really– it feels right. I’ve done conferences in VR where I’m speaking to 200 people, and I feel like I’ve met some of these people. We had little conversations in the hallway before or after the event. And it feels like you’ve been there. It tricks your brain into thinking you actually were there. It’s amazing.

Kent: Yeah. Both the Hololens 2 and the Magic Leap are shipping with eye tracking. And I think that with the Vive, there’s gonna be an enterprise version that has eye tracking as well.

Alan: Yep.

Kent: It does make a difference, especially in social interactions. And the thing that the Hololens 2 is really looking at is being able to look at objects, and that allows the computing technology to be contextually aware. It knows potentially what you’re looking at. It can detect the object — especially with virtual objects, it can know that you’re looking at the virtual objects — and you can start to use voice commands, so you could look at a light and say “off” and then eventually have the light turn off. There’s a lot of talk about edge compute devices and getting away from centralizing everything, and being able to have these different remote sensors. And so I expect that that’s going to be a huge thing, especially if you are a company that starts to deploy a lot of these edge compute devices that are detecting different aspects of an environment. The user interface for a lot of those devices could be a layer of augmented reality in these Hololens devices or virtual reality devices, where you can start to have your command center within a virtual or augmented space.
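
As a rough illustration of the look-at-a-light-and-say-“off” interaction Kent describes, here is a short TypeScript sketch pairing a gaze target with the Web Speech API. `getGazedDevice` and `sendCommand` are hypothetical hooks (not any vendor's API), and the gaze detection itself is assumed to come from whatever eye-tracking or head-gaze source the platform exposes.

```typescript
// Sketch of a gaze + voice interaction: whatever the user is looking at
// becomes the target of the next spoken command. getGazedDevice() and
// sendCommand() are hypothetical hooks; gaze detection is assumed to come
// from the headset/platform (eye tracking or a head-gaze raycast).

interface SmartDevice {
  id: string;
  supportedCommands: string[]; // e.g. ["on", "off", "dim"]
}

declare function getGazedDevice(): SmartDevice | null; // assumed gaze/raycast hook
declare function sendCommand(deviceId: string, command: string): void; // assumed backend hook

// The Web Speech API is still prefixed in some browsers.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

export function startGazeVoiceControl(): void {
  if (!SpeechRecognitionImpl) {
    console.warn("Speech recognition is not available in this browser.");
    return;
  }

  const recognizer = new SpeechRecognitionImpl();
  recognizer.continuous = true;
  recognizer.lang = "en-US";

  recognizer.onresult = (event: any) => {
    const latest = event.results[event.results.length - 1][0];
    const phrase: string = latest.transcript.trim().toLowerCase();

    const target = getGazedDevice(); // context: what is the user looking at right now?
    if (target && target.supportedCommands.includes(phrase)) {
      sendCommand(target.id, phrase); // e.g. look at a light and say "off"
    }
  };

  recognizer.start();
}
```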

Alan: Incredible. There are so many opportunities and so many possibilities, and — like you mentioned — it’s kind of like when the iPad was introduced. It was great. You could watch movies on it, and you could read books on it. And then all of a sudden people started making all sorts of things for it. But you look back 12 years ago, there was no such thing as an app developer, and now there are millions of app developers. Four years ago, there was no such thing as a VR developer — well, maybe five years now — but now there are probably thousands, if not hundreds of thousands, and soon to be millions of people developing for this technology medium. And it’s really about to enter this kind of exponential phase. But it’s not just VR and AR; it’s artificial intelligence, Internet of Things, 5G, quantum computing, edge computing. It’s molecular genetics. All of these technologies, all at the same time, are really going through this nascent stage where they’re entering into this exponential growth phase where they all go straight up. And since they all kind of work together, I think we’re entering into what is quite possibly the singularity in the next 15 years. 10, maybe.

Kent: Yeah, I’m a skeptic of the concept of the singularity. And the reason why is because I feel like human consciousness is way more complicated than these distributed technologies, and that– I mean, the theory of the singularity is that at some point the change is happening so quickly that it goes beyond human comprehension and understanding of these systems. And if we get to that point, then I feel like something has gone seriously wrong, because I don’t think it’s about creating a sort of self-sentient technology that is so brilliant in its own right that it doesn’t need humanity. I feel like, if anything, all these technologies are in service of humanity. But it does speak to this larger point of explainability and ethics and morality, because with artificial intelligence, at least– when you start to have these very complicated deep learning algorithms and you want to know why something made a decision, then it becomes a little bit of a black box and it becomes unexplainable. So there is a level of these different machine learning applications that are creating these models that include millions or billions of feature points that are sub-symbolic, in the sense that there’s no comprehensive story that you could look at and say, “Why did this determine that this was a cat and not a dog?”

But I do think you’re right in terms of these being exponential technologies, and there are gonna be ways in which they combine together that are unpredictable. Just in terms of, say, who would have predicted that a little extra bandwidth in the cellphone signal, which eventually catalyzed and inspired text messaging through accessibility needs, would mean that something like text messaging would be able to facilitate micro-economies in Africa? You kind of take these combinations of things and see how they combine to create these emergent behaviors that are a little bit hard to predict. And I feel like we’re in that realm right now, where there are going to be cryptocurrencies and the blockchain, being able to do distributed trust and self-sovereign identity. And in a lot of ways, I think that’s going to bootstrap– The point that I thought of when you were making that point is that, yes, there are app developers, and there is a value in having a closed ecosystem to be able to do native development.

However, I do think that there’s value in having open systems and open protocols, and in looking at the power of the open web. Because you do have this kind of tension between the closed, walled-garden app ecosystems and the power of the open web. They kind of work in antithesis to each other. I feel like there’s always going to be a dialectic between the closed and the open. In some ways, the app ecosystems can be on the bleeding edge. But the downfall of being on the bleeding edge is that if you want something to still work in a year or two years or five years or 10 years, then there’s a lot of technical debt that has to be maintained for a long time.

Alan: [laughs] Sorry, I laugh because to think that something that we build today is going to work in 10 years is almost laughable.

Kent: But there are VRML projects that were created 20 years ago that still work today. There are websites that were created over 25 years ago that still work today. So that’s the value of interoperable open standards: you *can* actually create stuff that is going to be able to be looked at in five or 10 years. I feel like that’s a dynamic conversation that I don’t hear as much about in the larger consumer VR space. But it matters in terms of the enterprise, especially if you’re working with these different systems where you don’t want to be maintaining a huge system each and every year just to make sure that the unique build still works. There’s value in being on the bleeding edge, but there’s also value in waiting for the open standards, like OpenXR for hardware, or WebXR, or the open web, for these open standards for identity.

So I feel like — depending on what you’re doing in the enterprise — if you do need to have stuff that is still accessible and usable in three to five to 10 years, then I think it’s worth looking at some of these other alternatives that maybe move slower. But once the WebXR 2.0 spec finally launches — within the next year or so, I imagine — then you’re going to see a huge renaissance in the alternatives like the open web, because I feel like not having those standards fleshed out has left all the spoils for development within either Unity or Unreal Engine. And for anybody who’s doing serious applications, I would definitely recommend them to do Unity or Unreal, but to also keep an eye on what’s happening in the open web space, because it’s going to be a huge part. Especially because, depending on what you’re doing, the downfall of those app ecosystems is that you have these walled gardens, where you have curators who may or may not want to support or promote your different applications. If Facebook does go down the route of only looking at gaming, then if you want to create a consumer application that is usable by the enterprise, it may be harder to get it onto the platform. So I think there are a lot of different tensions and tradeoffs that I just wanted to kind of flesh out there.
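
For teams weighing that open-standards path alongside a Unity or Unreal build, the browser-side entry point is small enough to show in a few lines. This is a minimal, hedged TypeScript sketch of WebXR feature detection with a flat-page fallback; the `inline-fallback` label and what you do with it are assumptions made for the example, not part of any spec.

```typescript
// Minimal sketch of the open-standards entry point: feature-detect WebXR in
// the browser and fall back to an ordinary in-page 3D view when immersive
// sessions aren't supported. "inline-fallback" is just a label for this sketch.

type ViewerMode = "immersive-ar" | "immersive-vr" | "inline-fallback";

export async function pickViewerMode(): Promise<ViewerMode> {
  if (navigator.xr) {
    if (await navigator.xr.isSessionSupported("immersive-ar")) return "immersive-ar";
    if (await navigator.xr.isSessionSupported("immersive-vr")) return "immersive-vr";
  }
  // No WebXR support: serve the same content as a regular WebGL scene in the page.
  return "inline-fallback";
}

// Usage: decide once at load time, then branch your renderer setup accordingly.
pickViewerMode().then((mode) => console.log(`Launching viewer in ${mode} mode`));
```

The same content path keeps working as headsets and browsers change underneath it, which is the longevity argument Kent is making for open standards.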

Alan: And that concludes part one of the XR for Business Podcast with our guest, Kent Bye. Coming up next on the XR for Business Podcast, we have Kent Bye, part 2.

  continue reading

141 jaksoa

Artwork
iconJaa
 
Manage episode 353483872 series 2763175
Sisällön tarjoaa XR for Business and Alan Smithson from MetaVRse. XR for Business and Alan Smithson from MetaVRse tai sen podcast-alustan kumppani lataa ja toimittaa kaiken podcast-sisällön, mukaan lukien jaksot, grafiikat ja podcast-kuvaukset. Jos uskot jonkun käyttävän tekijänoikeudella suojattua teostasi ilman lupaasi, voit seurata tässä https://fi.player.fm/legal kuvattua prosessia.

One of Alan’s biggest inspirations to start XR for Business was the prolific catalogue of Kent Bye, who has released 884 recordings for his VR-centric podcast, Voices of VR. Alan has Kent on the show for a chat that was too big for one episode! Check out Part 2 later this week.

Alan: Hey, everyone, Alan Smithson here, the XR for Business Podcast. Coming up next, we have part one of a two part series, with the one and only Kent Bye from Voices Of VR. Kent Bye is a truly revolutionary person and he has recorded over 1,100 episodes of the Voices Of VR podcast. And we are really lucky to have him on the show. And this is two parts, because it goes on and on. Welcome to Part 1 of the XR for Business Podcast, with Kent Bye from the Voices Of VR podcast.

Kent has been able to speak peer to peer with VR developers, cultivating an audience of leading VR creators who consider the Voices Of VR podcast a must listen, and I have to agree. He’s currently working on a book answering the question he closes with every interview he does, “What is the ultimate potential of VR?” To learn more about the Voices Of VR and sign up for the podcast. it’s voicesofVR.com. And with that, I want to welcome an instrumental person to my knowledge and information of this industry. Mr. Kent Buy, it’s really a pleasure to have you on the show.

Kent: Hey, Alan. It’s great to be here. Thanks for having me.

Alan: Oh, thank you so much. I listen to probably the first two or three hundred episodes of your podcast, and I went from knowing literally nothing about this industry to knowing a lot. And it’s those insights that you’re able to pull out from the industry that’s just amazing. So thank you for being the voice of this industry.

Kent: Yeah. And when I started the podcast, I wanted to learn about what was happening in the industry. And so I felt like one of the best ways to do that was to go to these different conferences, and to talk to the people who were on the front lines of creating these different experiences. And so at this point, I think I’ve recorded over 1,100 different interviews and have published over 760 of them so far. So it’s about for every two interviews I publish, I have like another interview that I haven’t. So I just feel like it’s important to be on the front lines, going to these gatherings where the community’s coming together and to just be talking to people and see what they’re saying. See what the power of this new medium is.

Alan: I had the honor of being interviewed by you at one of these conferences. I don’t know if it ever got published, but it was an honor anyway just to speak with you on the subject. But you get to talk to literally everybody, anybody who’s anybody in this industry. And it’s really an amazing experience to listen to these podcasts. And you really go deep into the technology of it, the listeners of this podcast are more maybe in the business, maybe they’re not really into VR. What are some of the business use cases that you’ve seen from these people that you’ve been interviewing that made you go, “Wow, this is incredible?”

Kent: Well, first of all, virtual and augmented reality as a medium is a new paradigm of computing: spatial computing. And I think one metaphor to think about is how we usually enter into the computer is by pushing buttons and moving a mouse around. And it’s almost like we have to translate our thoughts into a very linear interface in order to interact with computing. And it’s usually also in a 2D space, so a lot of times interacting and designing for 3D spaces. And so there’s kind of like this weird translation that you have to do all these abstractions in order to do computing. So I feel like one of the big trends that’s happening right now is that with spatial computing, it’s becoming a lot more natural and lot more intuitive.

And so anybody that’s doing design and 3D objects, it’s almost like a no-brainer, whether it’s in architecture, or designing 3D objects, or big aerospace, airplanes, cars. All these different people who are making these 3D objects in these CAD programs, there’s just something that you can make design decisions lot faster when you’re actually immersed into this space. And you don’t have to spend all this money to prototype these things out. So you see a lot of it in architecture, engineering, construction. But what I’m really excited about is these other aspects of natural communication. So how is AI going to be combined with these spatial computing platforms, being able to detect what we’re looking at with a Hololens 2, and to be able to then speak these different affordances and actions. We’re going to get to the point where you can just say something and just speak, much more like you would interface with other humans. And I think the computer technology is gonna become better and better at being able to detect what we are intending, what we’re saying.

I said the other huge area that we’re seeing just enormous amount of applications is in training. And really when you’re training, you really want to ideally do it yourself and be immersed into the context of the environment, to have all the emotions that are coming up when you’re under pressure to make a decision. But to be also embedded into a context that is mimicking what the real world situation is. And then you have to make choices and take action. And the action that you’re taking within VR is often very similar to those same embodied interactions that you may be doing in real life. So I feel like there’s so much of a mirroring of what’s happening in these virtual worlds that the training applications are just incredible, in terms of whether it’s a surgical simulation or Walmart’s using it to train for different employees. Elite sport athletes can do lots of different repetitions and be able to train themselves to have a level of situational awareness.

I’d say those are the big ones that I’m seeing right now. In the future I expect to see a lot more information visualization, data visualization, finding completely new ways to analyze data, symbolically and spatially. I think there’s a lot of work that can still be done. But a lot of things that I think about also is just like flow states, like what does it mean to work and how can you cultivate the deepest flow state that you possibly can, so that when you’re working you’re just not having the technology get in your way, but you’re having technology amplify what you’re able to do. So another big area that I’m seeing sort of early indication with, especially when I went to the Laval Virtual in France — it’s an expo that’s going for the last 21 years — this concept of open innovation. So collaboration and communication. Remote assist is another sort of separate thing. But in terms of innovation, what is the keys of innovation? And I think a big part of it is being able to openly share and ideate and brainstorm and tap into the more creative aspects of what you’re doing.

And so I’m seeing a lot of– like Desart Systems was working on some specific products for open innovation, which I’m excited about because a lot of what you’re seeing with augmented reality is for people who are first line workers. So people who are in factory floors, or people who are meeting assistance for remote collaboration, or the people who are on the grounds physically doing these different actions, whether it’s on a construction site or a factory floor. So a lot of the use cases for the Hololens have been very much in that realm. But I’m also really interested in terms of knowledge work, like what does it mean to be able to collaborate with other people and to lower all the barriers?

Alan: We had Jacob Lowenstein from Spatial on the show.

Kent: Oh, cool. Yeah. Yeah, I just talked to Anand [Agarawala] — who’s the CEO of Spatial — and saw the demo and just did a whole breakdown of all what they’re doing with Spatial.

Alan: Well, that speaks to exactly what you were saying; design work and collaboration and higher level work collaboration in augmented reality.

Kent: Yeah, I think that it’s still very early, but just– it’s also very early in terms of having this completely new paradigm for how you do spatial computing. I think there’s going to be a mix of sort of a, flashy Hollywood things that you see where the famous like Minority Report, where you’re kind of going through these different interfaces. That looks great, but it doesn’t always feel great if you have to do that for eight hours a day.

Alan: Yeah.

Kent: I think the key breakthrough is gonna be when you’re able to just not think about it, and kind of naturally move your body and be able to interface with computing with your full body. Because there’s this neuroscience concept, it’s called embodied cognition. And what that means is that we don’t just think with our minds, we think with our entire bodies. And so what does it mean to actually get your body engaged and moving around? It actually makes you think better. And anybody who likes to take meetings while they’re walking, you may find that you may have a different way of brainstorming and ideating when you’re actually in motion with another person. I think that spatial computing is actually going to be leveraging a lot of those different types of concepts, in that we spent a lot of time very stagnant and sitting in our desk. But a lot of the affordances of VR when you’re actually moving your body around, it actually is tapping into deeper levels of the way that you think. So I think that there’s gonna be huge potential for what’s it mean to be able to tap into that?

Alan: Absolutely. It’s really an exciting time. I– personally I do walking meetings all the time. And I can tell you, it just– it’s not the same to have a phone meeting or seated meeting when you’re walking that just sparks something. And I know Steve Jobs was a big advocate of walking meetings. So there must be something to it.

Kent: Yeah. And I think that I’m starting to see that spatial computing is going to be tapping into that. I’d also throw out there, that there’s supposed AR frames.

Alan: Yeah.

Kent: And I expect people going to be wearing like these sunglasses that are kind of shooting spatial audio into your ears, but being able to tie with your phone, getting GPS, and being able to — basically — detect which direction you’re looking at. There’s gonna be a lot of innovation that happens in just overlaying layers of audio on top of reality. We’ll eventually have digital objects on top of reality, but I think there’s a lot of innovation that’s happening, at least in the storytelling around, where when I go to these different film festivals, I kind of see what the storytelling potential is with these mediums. And I feel like there’s going to be this great convergence at some point in terms of figuring out how to engage people within a story to help teach them these different concepts. And I think that’s kind of like the next frontier of what is the blending of the storytelling affordances of VR on top of like the gamification game elements. You kind of have Hollywood mashed up with the game developer community, and VR is like this melting pot of all these different disciplines.

And so that’s what makes it so fascinating to me, is that you get people from every different domain has something to say about VR and AR, because it’s all about modulating the human experience. So I think we’re in this kind of very early Wild West era, where there’s not a lot of very specific best practices or experiential design theories that have been well established, and so you kind of have to figure it out on your own. But I feel like there’s enough proof of concepts to show that it’s effective. But to really tap into the deep ultimate potential, I think we’re still quite a ways of doing that. But one of the sort of dark horses — I’d say — for the enterprise is that there’s going to be an element of story and storytelling there, to really fully engage people. And I think we’re still very early in that era. Like with film, there is a cinema of attractions, where they were still trying to really figure out the language of the medium. I feel like we’re in a very similar spot, where they haven’t really figured out all the different affordances of the language that you use for spatial computing. It’s kind of an exciting time, just because there’s a lot of experiments to be done and a lot of stuff that still needs to be figured out.

Alan: It’s true. We see it every day, where things– I actually came up with this quote, “How do you disrupt an industry constantly disrupting itself?” And every day something comes on the news in virtual and augmented reality that flips the industry on its head. I mean, the introduction of ARKit and ARCore probably put 200 startups out of business. And we’re seeing these kind of rapid advances in technology. We’ve got AR platforms being hosted by Amazon, by Facebook, by Snapchat, where you can develop your own AR lenses. Anybody can do this, not just developers. So I think there’s going to be a democratization on the creation side, as well as this expansion on the enterprise side, which will — in my opinion — drive the consumer market forward.

Kent: Yeah, I feel like VR and AR is such an interesting realm to cover, just because it’s helping define what the human experience is and all the different contexts that we have. Because there are going to be entertainment applications, medical applications, and ways to communicate with our partners — whether it’s our romantic partners or business partners — being able to deal with death and grieving, spirituality applications in terms of connecting to myth and story and philosophy, but also our career and what we’re using in our jobs, connecting to friends and family, dealing with isolation or neurodegenerative diseases. Expression of identity is a huge thing with the facial filters, and I’ve seen that a lot more in the consumer space. But the different ways that we have virtual embodiment, and what does it mean to take on different characters and different bodies, and financial like virtual economies, as well as communication and education, and connecting to home and family. So I feel like there’s all these different specific contexts that they each are going to teach us something new about what it means to be human.

And I’d say that the difference between VR and AR through this lens of context is with VR, you’re able to completely shift your context. So you may be at home, but you’re all of a sudden now you’re completely embedded within an office meeting and now you’re at work. So you’re kind of be able to do this huge context switch. But with AR, it’s less about doing a complete context switch, because if you’re at home, you’re already at home and you may be able to overlay different people inside of your existing context, but you’re still in that center of gravity, of whatever context you happen to be in. So I’d see that with AR, you’re gonna be still embedded and grounded into whatever context you are, but you’re able to kind of pull things in. I think it’s gonna be harder to do a complete context shift with AR, but as AR and VR start to eventually converge, maybe you will see that a little bit more. But that’s at least some of the ways that I’m seeing a little bit of the differences for.

For example, if you want to do an architectural visualization, it may be better to do that in VR, because you’re able to completely shift your context and be completely immersed with that environment. But if you’re trying to look and have a group conversation with five different people about a 3D object, maybe you want to have that in AR — especially if you’re co-located with each other in the same room — so you can have all the affordances of body language and communication that we all have. But if you still want to talk about these virtual objects and maybe having either a Hololens on your head and maybe there’s some tablet’s where there’s different ways of accessing and annotating these different 3D objects. So those are some of the different use cases that I am seeing, at least at this point.

Alan: Yeah, it’s interesting. Jacob from Spatial was mentioning about– because I asked him “Why wouldn’t somebody just put VR on and go into a collaboration room?” And their response actually, I hadn’t considered was: when you’re in a group — let’s say you’re four people in an office, and you’re face to face — you still want to see those people. You don’t want to be in four different VR headsets even though you’re in the same room. It would be weird. Whereas you can also then have the four people that are in the room, and have a fifth person who is somewhere else in the world kind of beaming in,, and those four people then beaming out to them. It really creates this feeling of community and a lot of times you also want to be able to see other devices while you’re in there. And we’re getting to the point where we’re going to be able to port our devices into VR, our computer screens and our phone screen or whatever, but we’re not quite there yet.

Kent: It reminds me of– I have gone to Microsoft Build for the last three years, and that’s a good place to kind of see some of the AR demos that are there in terms of the partners with Microsoft. And so with some of the demos that I’ve seen there were for people who were doing sales for say, medical equipment. Sometimes the medical equipment is very specific to the context of a specific room. And so I think people who are doing sales would be able to look at the existing context of a room and start to overlay these digital objects on that room, but still have that face to face interaction. And especially with the Hololens 2, where you can kind of flip up the visor if you want to look people directly in the eye. But I feel like just in talking to different people, the sales increase in terms of being able to have them see what it was actually going to look like in that context. It also just– Lowe’s and these different companies that when you go to like Home Depot or Lowe’s, they’ll do these whole build outs of an entire kitchen.

But oftentimes you may have a very specific thing you’re looking for and you want to know what that looks like in the context of your kitchen. So being able to detect your space in your context and then put that object — whether it’s a refrigerator or whatever it is — into your– into that context, it lowers the cognitive load of imagination, because it actually is very difficult for you to imagine what it’s going to look like. And you have to kind of just see it before you really know whether it’s going to work or not. And if you can do that and preserve that existing context and then lay the object in there, I think that’s another huge use case that I’m seeing. Whether it’s selling medical equipment or selling kitchen equipment for home renovation, there’s some of the unique affordances of AR as a medium.

Alan: And the great thing about that is that that’ll work on any device, that’ll work on your phone. And by the end of this year, there’ll be over 2 billion smartphones that are AR enabled with ARKit and ARCore. And so you’ll be able to put a fridge in your kitchen in context in the right size and see if it fits. Then you can drop a car in your driveway and take a picture of your new car. So I think there’s gonna be a huge push towards kind of three dimensional retail and e-commerce with these mobile devices, and that you don’t even need a headset for that. You can use the device that’s in everybody’s hands. And it’s not the same experience, but it doesn’t have to be in those cases.

Kent: There’s a interesting point that came up in my mind, as you were saying some of that. And that’s that I think a lot of enterprises, they need to see a lot of numbers in terms of the improvement of how much more efficient things are. And people like Accenture, they’ve been certainly coming up with a lot of those different quantitative studies, and I think a lot of companies would want to see that. What is the return of investment for jumping into immersive? And I think those are important to be able make those decisions. I think it’s also important to point out that there’s a lot of benefits for spatial computing that maybe never be able to be quantified with a specific number. There’s a certain quality of experience that happens, that I feel like there’s a whole realm of the usefulness of these spatial computing technologies that it’s going to be more behavior and cultural shifts in order to use these technologies. And specifically what I mean is that we kind of live in an information environment right now, where we really want fast bite-size information, on the level of tweets. And I kind of see spatial computing as the antithesis of that, because it’s very difficult to hop into a virtual reality experience for a few minutes, although I will say–

Alan: It’s impossible to hop in *in* a few minutes. Every time I go to use mine, I’ve got to wait for all the updates. [laughs]

Kent: Yeah, there’s all sorts of thresholds. I mean, I will say that with Oculus Quest that’s changing for me. I’ve had early access to the Quest. And I do think that the Quest is gonna be revolutionary in terms of making it easier for people to hop in.

Alan: Oh, I can’t wait.

Kent: The focus of Oculus has been much more on gaming rather than productivity applications, but they still have a number of different productivity applications that are coming out, whether that's Tilt Brush for doing rough prototyping, or Gravity Sketch. We'll have to wait and see what other enterprise applications come out. But I do expect the Quest to have a lot of applications. It's the headset that has no tether, no wires — it's completely wireless and mobile — and you've got these 6DOF controllers.

Alan: It’s really exciting. What’s the price point? I think it’s–

Kent: So there’s $399 for 64 gigabytes, $499 for 128 gigabytes. However for the enterprise it’s like $999 per headset, with a $180 per year. There’s a whole Oculus for Business that is going to have a whole specific offerings for the enterprise and that you get the ability to kind of turn off all of the main Oculus Home and be able to distribute just your application. And they’re working on different deployment solutions and whatnot. Because if you’re working with dozens or hundreds or thousands of headsets, then you’ve got to have some system to be able to deploy updates and software to all those headsets. And so that’s kind of the software they’re working on. But just to kind of wrap up a point that I was beginning to make, which is that I’d see virtual reality technologies to be very similar to like sitting down and reading a book, where you’re actually making a commitment to be completely immersed and focused on a very specific task. And I feel like that is becoming rarer and rarer.

And I think that’s been in some part the difficulty of why VR may have not been taking off as quickly as some would have imagined. The technology is amazing, but there’s a certain amount of cultural shift that you have to have in order to really commit to being immersed and present within a virtual experience. And I feel like once you cultivate that gets that quality of being, where you can be fully immersed. And I feel like that is tapping into other levels of focus that are becoming more and more rare within our lives. And so the levels of like focus and productivity and consciousness hacking, I expect that there’s gonna be ways for people to be able to really get into these deep flow states and potentially even start to do more work from home, especially if you work in an open office environment where it becomes more and more rare for you to really have this deep focus that you need. So I just wanted to point out that there’s a lot of emphasis right now in our culture on numbers and trying to quantify things.

And I’ve been focusing a lot also in what are the different qualities that maybe difficult to put a number on? And I think it’s like these levels of presence, these depths of connection, the intimacy that you can have when you’re face to face with somebody else. There’s all these levels of body language where you fly across the world, because you want to have that intimacy. I think eventually we’ll get there with VR, we still have a lot of ways to go in terms of body language and emotional expression, where it’s not quite the same as being face to face. And maybe it’ll always be preferable to be face to face with certain contexts. But for some situations, I think it’s gonna be a lot better to just meet in virtual world and to not have to travel as much. And especially if you’re talking about like remote collaboration with many different people, because if you use something like Zoom or Skype, it’s OK for a couple of people. But once you start to have like a group conversation with five or twelve people…

Alan: Yeah, it falls apart.

Kent: You really want to have body language, and you want to have spatial audio. It's so much more efficient to have big team meetings within virtual spaces, rather than trying to mediate it through 2D digital technologies. So one of the things I'm really interested in seeing is some of these startup companies within the VR space that are remote, and have to dogfood their own remote collaboration tools. I think what that's going to bring is maybe less emphasis on specific jobs or tasks where you're expected to go into an office. I think eventually we'll get to the point where you could live out in the middle of the country, and as long as you have a good Internet connection, you could still be interfacing with some of the most talented and brilliant people in your disciplines or domains, and you could be anywhere in the world. And I think the potential of what that means is really exciting, because it doesn't mean that you have to go and live in Silicon Valley or Los Angeles, or New York City, or whatever major city in the world, to be able to collaborate and work with some of these people.

I see this other trend of remote collaboration and remote work, where people are able to work from home. But the thing that's lost is those group conversations and the more serendipitous water-cooler conversations and stuff like that. So it'll be interesting to see how some of these remote companies are able to adapt and create these tools. One thing I would say from my experience of working at a remote company is that if you're completely, 100 percent remote, then it works great. But if you have a critical mass of people who are face to face, then it's really difficult to pull all these other people into these remote environments, just because it's a definite context switch. So those are some of the things where, in the long term, I'm looking forward to seeing how it all plays out.

Alan: Yeah. It’s– I think another thing that will make a big difference and it doesn’t seem like a big thing, but eye tracking. Being able to actually look somebody in the eyes in VR. I’ve had the opportunity of playing with the Tobii tracking system with HTC Vive. And just being able to look at somebody, look at them in the eyes and know that they’re actually looking at you. They’re not an avatar that’s kind of a disembodied, cartoonish version of themselves. And to be honest, everybody keeps trying to push towards photorealistic avatars, and there’s the uncanny valley of getting too close to reality, and then your brain kind of goes, “there’s something not right” and rejects it. But I think we can stay on the cartoonish side of things as long as we hope things like eye tracking and hand tracking. It really– it feels right. I’ve done conferences in VR where I’m speaking to 200 people, and I feel like I’ve met some of these people. We have little conversations in the hallway before or after the event. And it feels like you’ve been there. It tricks your brain into thinking you actually were there. It’s amazing.

Kent: Yeah. Both the Hololens 2 and the Magic Leap are shipping with eye tracking. And I think the Vive is going to have an enterprise version that has eye tracking as well.

Alan: Yep.

Kent: It does make a difference, especially for social interactions. And the thing the Hololens 2 is really looking at is being able to look at objects, which allows the computing technology to be contextually aware. It potentially knows what you're looking at. It can detect the object — especially with virtual objects, it can know that you're looking at the virtual object — and you can start to use voice commands, so you could look at a light and say "off" and eventually have the light turn off. There's a lot of talk about edge compute devices, getting away from centralizing everything, and being able to have these different remote sensors. So I expect that's going to be a huge thing, especially if you're a company that starts to deploy a lot of these edge compute devices that are detecting different aspects of an environment. The user interface for a lot of those devices could be a layer of augmented reality in these Hololens devices or virtual reality devices, where you can start to have your command center within a virtual or augmented space.
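
As a rough illustration of the gaze-plus-voice interaction Kent describes, here's a small, self-contained sketch: pick the object closest to the user's gaze ray, then route a recognized voice command to it. The `Vec3` and `SceneObject` types and the command strings are hypothetical; a real Hololens app would get gaze and speech from the platform SDK rather than from this hand-rolled math.

```typescript
// Sketch of "look at a light and say 'off'": find the scene object whose
// direction from the user's head is closest to the gaze direction, then
// apply the voice command to it. All names here are illustrative.

type Vec3 = { x: number; y: number; z: number };

interface SceneObject {
  id: string;
  position: Vec3;
  on: boolean;
}

const sub = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const dot = (a: Vec3, b: Vec3): number => a.x * b.x + a.y * b.y + a.z * b.z;
const len = (a: Vec3): number => Math.sqrt(dot(a, a));

// Return the object within `maxAngleDeg` of the gaze ray, preferring the
// smallest angular offset; undefined if nothing is being looked at.
function findGazedObject(
  head: Vec3,
  gazeDir: Vec3,
  objects: SceneObject[],
  maxAngleDeg = 5
): SceneObject | undefined {
  let best: SceneObject | undefined;
  let bestAngle = (maxAngleDeg * Math.PI) / 180;
  for (const obj of objects) {
    const toObj = sub(obj.position, head);
    // Clamp to avoid NaN from floating-point error in acos.
    const cos = Math.max(-1, Math.min(1, dot(gazeDir, toObj) / (len(gazeDir) * len(toObj))));
    const angle = Math.acos(cos);
    if (angle < bestAngle) {
      bestAngle = angle;
      best = obj;
    }
  }
  return best;
}

// Wire a recognized voice command to whatever the user is currently gazing at.
function handleVoiceCommand(command: string, head: Vec3, gazeDir: Vec3, objects: SceneObject[]): void {
  const target = findGazedObject(head, gazeDir, objects);
  if (!target) return;
  if (command === "off") target.on = false;
  if (command === "on") target.on = true;
}
```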

Alan: Incredible. There are so many opportunities and so many possibilities, and — like you mentioned — it's kind of like when the iPad was introduced. It was great: you could watch movies on it, you could read books on it. And then all of a sudden people started making all sorts of things for it. But look back 12 years: there was no such thing as an app developer, and now there are millions of app developers. Four years ago — well, maybe five years now — there was no such thing as a VR developer, but now there are probably thousands, if not hundreds of thousands, and soon to be millions of people developing for this medium. And it's really about to enter this exponential phase. But it's not just VR and AR; it's artificial intelligence, the Internet of Things, 5G, quantum computing, edge computing, molecular genetics. All of these technologies, all at the same time, are coming out of their nascent stage and entering this exponential growth phase where they all go straight up. And since they all kind of work together, I think we're entering into what is quite possibly the singularity in the next 15 years. 10, maybe.

Kent: Yeah, I’m a skeptic of the concept of singularity. And the reason why is because I feel like there is human consciousness that is way more complicated than these distributed technologies and that– I mean, the theory of singularity is that at some point that the change is happening so quickly that it goes beyond the human comprehension and of understanding these systems. And if we get to that point, then I feel like something has gone seriously wrong, because I don’t think it’s about creating a sort of a self-sentient technology that is so brilliant within its own right that it doesn’t need humanity. I feel like, if anything, all these technologies are in service of humanity. But it does speak to this larger point of explainability and ethics and morality, because an artificial intelligence, at least it’s– when you start to have these very complicated deep learning algorithms and you want to know why something made a decision, then it becomes a little bit of a black box and it becomes unexplainable. So if there is a level of these different machine learning applications that are creating these models that include millions or billions of feature points that are sub-symbolic in the sense that there’s no comprehensive story that you could look at and say “Why did this determine that this was a cat and not a dog?”.

But I do think you're right that these are exponential technologies, and there are going to be ways in which they combine that are unpredictable. Who would have predicted that a little extra bandwidth in cellphone signals would eventually catalyze and inspire text messaging through accessibility needs, and that something like text messaging would be able to facilitate micro-economies in Africa? You take these combinations of things and see how they combine to produce emergent behaviors that are a little hard to predict. And I feel like we're in that realm right now, where there are going to be cryptocurrencies and the blockchain, and being able to do distributed trust and self-sovereign identity, and in a lot of ways I think that's going to bootstrap things. The point I thought of when you were making yours is that, yes, there are app developers, and there is value in having a closed ecosystem to be able to do native development.

However, I do think there's value in having open systems and open protocols, and in looking at the power of the open web. You have this tension between the closed, walled-garden app ecosystems and the power of the open web; they work in antithesis to each other, and I feel like there's always going to be a dialectic between closed and open. In some ways, the app ecosystems can be on the bleeding edge. But the downfall of being on the bleeding edge is that if you want something to still work in a year or two years or five years or 10 years, then there's a lot of technical debt that has to be maintained for a long time.

Alan: [laughs] Sorry, I laugh because to think that something that we build today is going to work in 10 years is almost laughable.

Kent: But there’s VRML projects that were created 20 years ago that still work today. There’s websites that were created over 25 years ago that still work today. So that’s the value of interoperable open standards, is that you *can* actually create stuff that is going to be able to be looked at in five or 10 years. I feel like that’s a dynamic conversation that I don’t hear as much about in the larger consumer VR. But in terms of the enterprise, especially if you’re working with these different systems where you don’t want to be maintaining a huge systems each and every year and just making sure that the unique build still works. There’s value being on the bleeding edge, but there’s also value of waiting for the open standards, like the OpenXR for hardware, or the WebXR, or the OpenWeb, for these open standards for identity.

So I feel like, depending on what you're doing in the enterprise, if you need stuff that is still accessible and usable in three to five to 10 years, then it's worth looking at some of these alternatives that may move slower. But once the WebXR spec finally launches — within the next year or so, I imagine — you're going to see a huge renaissance in alternatives like the open web, because I feel like not having those standards fleshed out has left all the spoils to development within either Unity or Unreal Engine. For anybody who's doing serious applications, I would definitely recommend Unity or Unreal, but also keep an eye on what's happening in the open web space, because it's going to be a huge part of this. Especially because, depending on what you're doing, the downfall of those app ecosystems is that you have walled gardens, where curators may or may not want to support or promote your application. If Facebook does go down the route of only looking at gaming, then if you want to create a consumer application that is usable by the enterprise, it may be harder to get it onto the platform. So there are a lot of different tensions and tradeoffs that I just wanted to flesh out there.
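
For context on the open-web path Kent is contrasting with native Unity/Unreal builds, here's a minimal sketch of entering an immersive session with the WebXR Device API from an ordinary web page. Rendering is omitted and the structure is only illustrative; in practice you would hand the session to a library like three.js or Babylon.js, and the XR objects are typed loosely here to avoid depending on specific type definitions.

```typescript
// Minimal sketch: enter an immersive VR session from a web page using the
// WebXR Device API. Drawing the scene is left out; the per-frame callback
// shows where the viewer pose would drive rendering.

async function enterVR(canvas: HTMLCanvasElement): Promise<void> {
  const xr = (navigator as any).xr;
  if (!xr || !(await xr.isSessionSupported("immersive-vr"))) {
    console.warn("No immersive-vr support; fall back to a flat 3D view.");
    return;
  }

  const session = await xr.requestSession("immersive-vr");

  // Make the WebGL context compatible with the XR device and hand it over.
  const gl = canvas.getContext("webgl", { xrCompatible: true });
  if (!gl) throw new Error("WebGL is not available");
  await session.updateRenderState({
    baseLayer: new (window as any).XRWebGLLayer(session, gl),
  });

  const refSpace = await session.requestReferenceSpace("local-floor");
  session.requestAnimationFrame(function onFrame(_time: number, frame: any) {
    const pose = frame.getViewerPose(refSpace);
    if (pose) {
      // ...draw one view per entry in pose.views here...
    }
    session.requestAnimationFrame(onFrame);
  });
}
```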

Alan: And that concludes part one of the XR for Business Podcast with our guest, Kent Bye. Coming up next on the XR for Business Podcast, we have Kent Bye, part 2.
