TL;DR
As part of its regular Emerging Technology series, the Service Integration team at the Department of Internal Affairs (DIA) brought together presenters from inside and outside of government to share how they are using Virtual and Augmented Reality to engage or train people, or to perform tasks.
The technology behind Virtual Reality (VR) and Augmented Reality (AR) has advanced significantly. It now provides opportunities to deliver empathetic and insightful experiences that help people get closer to other people’s experiences, inform decision-making, lower risks, challenge our worldview, and gain efficiencies in the spaces we work and live in. Putting this technology to use is now even easier as there are products available to help non-developers to use VR and AR over the web.
If you’re interested in finding out more about emerging technologies at our regular showcase then please sign up to our mailing list. We encourage you to also view the presentations from our previous Emerging Tech sessions.
Introducing Augmented Reality
Swati Gupta from Callaghan Innovation talked about the applications of Augmented Reality and the industrial application she is planning to explore.
Video transcript
So this talk is a very general introduction to augmented reality, what it really is and what kind of applications people are building using this technology.
Since this is the first talk, I thought I would start by telling you a little bit about where on the reality continuum augmented reality technology lies.
So as you can see on the very left is the real environment, the world that we live in, the things that we touch, see, hear and feel. And on the very right is a virtual environment. So a completely virtual environment is an escape technology. So you put on a headset and you escape into an entirely different world. You cannot see what is out there in the real world around you, and you interact with virtual objects using remote controls and things like that.
In the middle of the two lie augmented reality and augmented virtuality. So they are very similar to each other. Augmented reality is where in the real world you add a little bit of the virtual and you overlay some graphics on top of the real world.
And augmented virtuality is where in the virtual world you overlay a little bit of real world on top of that. So they are very similar to each other and they lie in the middle of the continuum.
So talking more about augmented reality. What it helps you do is, there's a variety of different types of platforms and devices that you can use. And if you view the world through those devices you can overlay computer-generated graphics, static or moving, sounds, touch and even smell on top of the real environment, so you see the real as well as the virtual. And it could be something as simple as a text notification or something as complicated as a medical surgery, or instructions related to performing a medical surgery.
There are different types of AR technologies available. One is called marker-based technology, which uses a camera and a marker. That marker could be a barcode, a simple image, or something that's very easily recognisable and does not require a lot of computational power. So if you point your camera at that marker it lets you do things with it. For example, in this image there is a marker on top of the wheel and it lets you select what sort of wheel cover you want for that.
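To make the marker-based approach concrete, here is a minimal sketch of marker detection using OpenCV's ArUco module in Python. The library, marker dictionary and camera index are our own illustrative choices, not something the presenter specified, and drawing the marker outline stands in for rendering real virtual content:

```python
# Minimal marker-based AR sketch: detect ArUco markers in a webcam
# feed and draw an overlay where each marker is found.
# Requires: pip install opencv-contrib-python (OpenCV >= 4.7 for
# ArucoDetector; older versions expose cv2.aruco.detectMarkers() instead).
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)  # default camera; adjust the index as needed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = detector.detectMarkers(gray)
    if ids is not None:
        # A real AR app would render a 3D model anchored to the marker;
        # drawing the detected outlines stands in for that overlay.
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("marker-based AR sketch", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

The detection step is deliberately cheap, which is the point made above: a high-contrast marker can be found reliably without much computational power.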
The next type is marker-less, which basically functions based on your location data, GPS. There's data related to your movement, from the gyroscope, accelerometer and things like that. It mostly leverages the technologies which are already there on your smartphone and other devices. Like in this image. This is a concept really: if you are walking down the street it can tell you which is the best restaurant for a certain type of cuisine, for example.
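As a rough sketch of the location-based idea, the app only needs the device's GPS fix and a little geometry to decide which nearby point of interest to label. The haversine formula is standard; the points of interest and coordinates below are invented for illustration:

```python
# Markerless (location-based) AR sketch: given the device's GPS fix,
# find the nearest point of interest to annotate on screen.
# The POI data and device position are made up for illustration.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6371000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

pois = [
    ("Ramen bar", -41.2920, 174.7780),
    ("Pizza place", -41.2935, 174.7762),
    ("Dumpling house", -41.2901, 174.7795),
]

device_lat, device_lon = -41.2924, 174.7773  # pretend GPS fix
name, dist = min(
    ((n, haversine_m(device_lat, device_lon, lat, lon)) for n, lat, lon in pois),
    key=lambda item: item[1],
)
print(f"Overlay label: {name} ({dist:.0f} m away)")
```

A real app would combine this with the compass and gyroscope readings mentioned above, so the label is drawn in the right place on screen as the phone moves.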
Then there's projection-based, which uses light. It projects light using a tiny projector built into the device, and it lets you interact with that light and do things in the augmented environment.
And the last one is superimposition-based. If you look through the camera into the real world it lets you superimpose virtual objects on top of that, just like in this image.
So these are the four types of AR. Now, looking at some of the platforms that you can use: you can use mobile phones and tablets, just like the iPad in the previous image.
Or you can use smart glasses. The most popular example is Google Glass, which has recently been re-launched as Google Glass Enterprise. It's basically a smartphone on your face. It doesn't have a lot of computational power, or anything. It lets you visualise some of the information you typically see on your mobile phone, right in front of your eye.
And then there are full-fledged augmented reality heads-up displays. They are the real AR headsets, and the most notable upcoming ones are Microsoft's HoloLens, one called Daqri, and another one coming called Meta 2.
What's special about these platforms is that they've got specialised technology and in-built cameras which can do depth perception and projections and things like that. And they have their own in-built computational power, with a CPU and GPU and everything installed on top of that. So they allow you to do a lot more than what a typical smart glass would allow you to do.
The thing about these platforms is that they're still upcoming. So it's not like you can go to your electronics store next door and buy a HoloLens or a Daqri. Unlike virtual reality, which is more widely available these days: it's much easier to buy an Oculus Rift or HTC Vive than the AR headsets. So it's more cutting-edge, more upcoming at the moment.
Applications, I guess we have all heard about this one. So as you can see, through the mobile phone you are looking at the real world and then you are augmenting that real world with Pokémon as you go. So Pokémon Go is the most popular AR application so far.
And then there's Ikea's catalogue. If you download the Ikea catalogue on your mobile phone, it uses superimposition technology, and you can overlay different furniture and all those things onto your home and then see what you like and what you don't like.
Aviation, there's lots of different types of applications where this can be used. Aeroglass is a startup company that overlays flight paths and instrument data on a pilot's vision. Fighter pilots in the military have been using this kind of technology for a long time, but it's only now becoming available for commercial and recreational pilots.
Design is another very useful application of AR. Augment is a startup company that creates plug-ins for existing design software like CAD and other 3D design tools. If you use that plug-in, print out your design and then put your iPad next to it, you see it come to life in 3D. It's very useful in architecture and also interior design.
Entertainment is another one which is very upcoming. Seespace is a company that has created what they call augmented TV. It displays content related to the TV programme that you're watching right in front of it, or on the side, so that it doesn't obstruct the view.
Logistics. In 2015 DHL did a pilot study where they put markers on their delivery boxes and the workers used Google smart glasses to quickly scan what is what and where it needs to go. I think it cut their time by one tenth or something like that, so it drastically reduced the number of hours that workers had to spend. Because of the success of their programme, other companies like SAP and Smartbake also implemented this in their warehouses.
Education and training is another very obvious application area. There's lots of different apps available for education if you go to the app store. These are just three of them, and there are probably 30 or 300 more out there. Elements 4D and Anatomy 4D let you visualise what the different things are in 3D, and you can interact with them as well. You could do that using your smartphone or, if you have them, smart glasses.
Construction and manufacturing is another very obvious application area. In this picture the man with the beard is wearing a Daqri helmet. Daqri is the only company creating helmets that comply with industry health and safety requirements. What it would let you do in the construction industry is maintain a steady view of the building plan. It shows you in real time what goes where, so it's very easy to do alignments and things like that. It lets you go paperless because you see the display right in front of your eyes, which saves a lot of time workers would otherwise spend looking at a document and trying to match it to the real world. It also lets you see where other equipment and other workers are, so it creates situational awareness around you.
There's other applications. The most obvious is games, but I won't go into more detail here. Health care is another one. Marketing, travel. Logistics I've already talked about. Safety and rescue operations are other important applications of augmented reality. And there's a lot more out there.
In the end, just a few words about what we're doing here at Callaghan. What we're trying to do is... this is a very new project, so we've only just started exploring it. And this is our vision, which I'm pretty sure will not come to fruition the way we envisaged it right at the start, but it is our guiding plan for the direction in which we want to explore.
So we are connecting augmented reality with the Internet of Things. So the idea is, it's a complicated diagram, sorry about that. I'll try and simplify it. There is an observation item right in the middle. This is for inspection and maintenance. If there's an inspector inspecting a machine, they would wear AR glasses, and their inspection tools, like vernier calipers and torque wrenches, would be wirelessly connected to the computer and would pass on the data. They would also have wrist-wearable devices where they can do inspection checklists and take notes and things like that. They would be performing the maintenance using these tools and would be able to see the output in their AR glasses in real time. And what we actually want to do is C-AR, which is collaborative AR, where two technicians can work together on one machine and see what the other is doing. It's kind of like two people trying to dig a tunnel: they would know where the other person is, so they are able to line up. So that's the general idea of what we're trying to do at Callaghan.
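As a sketch of that data flow, the connected tools act as publishers and the AR display as a subscriber on a shared stream. This stdlib-only mock-up is our own illustration of the shape of the idea, not Callaghan's implementation; the tool names, readings and tolerance are invented, and a real system would use a wireless transport such as MQTT or Bluetooth rather than an in-process queue:

```python
# Sketch of the AR + IoT inspection idea: a connected tool streams
# readings onto a bus and the AR display consumes them in real time.
import queue
import random
import threading
import time
from dataclasses import dataclass

@dataclass
class Reading:
    tool: str      # e.g. "torque_wrench_01" (invented identifier)
    metric: str    # e.g. "torque_nm"
    value: float

bus: "queue.Queue" = queue.Queue()

def torque_wrench(out: queue.Queue) -> None:
    """Pretend wireless torque wrench publishing readings."""
    for _ in range(5):
        out.put(Reading("torque_wrench_01", "torque_nm", random.uniform(38, 46)))
        time.sleep(0.2)
    out.put(None)  # sentinel: end of the inspection run

def ar_display(inbox: queue.Queue, spec_nm: float = 42.0) -> None:
    """Pretend AR headset overlay: annotate each reading against spec."""
    while (reading := inbox.get()) is not None:
        status = "OK" if abs(reading.value - spec_nm) <= 2.0 else "OUT OF SPEC"
        print(f"[AR overlay] {reading.tool}: {reading.value:.1f} Nm -> {status}")

publisher = threading.Thread(target=torque_wrench, args=(bus,))
publisher.start()
ar_display(bus)
publisher.join()
```

The collaborative AR (C-AR) idea then amounts to each technician's headset subscribing to the same stream, so both see the same machine state in real time.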
Create your own reality
Alex Young from AWE talked about their web-based platform, which extends the responsive web into mixed realities and enables non-developers to create Virtual, Augmented, Mixed or eXtended Reality apps for the web.
Video transcript
Hi, I'm going to do a super quick overview of AR, VR, MR, XR all working in the web. Possibly the quickest one I've done, so forgive me if I generalise a fair bit and skip over some complex and big parts, but happy to chat offline if anyone wants to know more.
So I guess a little bit about us. We're a team of 14 who have spent the last 10 years or so working in the AR and VR space. So in 2009, we built the world's first augmented reality content management platform that enabled non-developers to create and publish content to the native AR browser apps like Layar and Junaio.
Metaio, who developed Junaio, were later acquired by Apple and are the team, and I guess the underlying tech, behind the ARKit that was released by Apple in September last year. We are headquartered in Wollongong just near Sydney, with team members across Australia, Indonesia, and Poland.
Well, we've always been web people and we absolutely love the web. So for us the web is really, you know, the largest platform in the world. It's got the highest number of developers and content creators and the largest audience. And it's really democratic, so it's really open and inclusive. And when you look at how many people can use AR and VR now, for HMDs, so head-mounted display-only apps, at the end of last year it was about 26 million.
For mobile AR and VR apps, about two billion people had the capability to use them, meaning they had devices they could download and run them on. And on the web, you've got over 3 billion installed web browsers on different devices that support using AR and VR content directly in the web.
So basically we developed Awe. So that's A-W-E. So it's an entirely web-based creation platform that enables non-developers to easily create image and location-based augmented reality, virtual reality, and 360 interactive experiences. And these can be seamlessly woven together to create rich, immersive experiences. And if you're a web developer, you can easily add your own CSS and JavaScript to extend those experiences further.
And we've done that to really, I guess, minimise the time, effort, and cost to create VR and AR, and also because a lot of people aren't actually developers. With head-mounted display-only apps and mobile-only apps, you've got to be a developer or hire a developer.
You can read through the steps there, but there are a lot of different steps to create those apps, and a lot of time taken to do them. And generally you've got to do that each time for each different platform you want to target. With Awe, or with web-based AR and VR, you literally create it once and it works across all of the different browsers and devices.
If you look at how to view it, basically people don't need to download and install any native applications. It literally just works across smartphones, tablets, computers, head-mounted displays, VR goggles, AR glasses, that type of thing, by simply clicking on a web link (URL). It opens in the web browser on the device that you're using, and the content is automatically adapted by the Awe platform to the device form factor and any specific browser quirks, so it literally just works.
I'm gonna show you a quick video of Milgram's continuum. It illustrates the different modes and Awe working within a web browser. This was done a while ago, so note that where an image marker (or fiducial marker) is shown, Awe now supports full image recognition and tracking in the web browser. That means you can bring literally any image to life.
So here's an example of Milgram's mixed reality continuum showing four main modes that make up this continuum. From reality on the left hand side through to virtual reality on the right hand side and the two modes in between. If we look at what is presented to us, this is a virtual scene, so a 360 or VR type of experience with some virtual objects in there that you can interact with.
You can place it into stereo mode so you can experience this in a head-mounted display as well. To the left of this you see augmented virtuality, which is the same type of scene but with sensor data. So the camera view, in this example, is projected into the scene, and the sensor data can be anything that you like.
On the far left, so showing now, is just plain reality, so standard view of the real world or the fleshy meat space we live in. And if we hold up markers there, nothing happens. It's just the real world. But if we turn on augmented reality just to the right of that, we can see that tracking is now possible. And the digital content is now overlaid onto the images and objects that it's seeing. And it uses computer vision to do this. But because this is just one type of sensor data, we can project this into the augmented virtuality scene as well.
So if we hide that, it allows us to use that to move the marker content around inside the 3D virtual space, like what you're seeing here. And we can use other sensor data in this context as well. And if we go back to the full VR environment to the right of augmented virtuality, we can see that we've now seen the four main modes all working entirely within a web browser, with no plugins, and it can work across smartphones, tablets and head-mounted displays as the output channels as well.
So mixed reality is a really good term, it's a great term, but Microsoft did almost too good a job at marketing the HoloLens, so many people now associate mixed reality with just HoloLens. So competitors in Silicon Valley, led by Qualcomm really, have rallied around a new term, which is extended reality, or XR for short, but this effectively means the same thing.
XR has now been adopted on the web, starting with a proposal initiated by Mozilla which makes ARKit and ARCore tracking available to web browsers too. And now all the standards development that was going on with WebVR has evolved to focus on the broader WebXR strategy. We'll take advantage of that once it's stable, but this is likely to take another year or more before it becomes a standard part of the main browsers. Meanwhile you can use 360-degree media VR, location-based AR, and image-based AR like this in the Awe platform right now.
So I'm gonna show a super quick video that shows image recognition and tracking working inside the web browser. So this one's on an iPhone running Safari, and in this example it illustrates multiple images being recognised and tracked without having to rescan each image to initiate the content display.
So you just go to a URL, point your phone at the images, and this shows that you're no longer bound to using those fiducial markers or images with markers around them. You can literally use any image you like. And here we are.
So the majority of our customer base is in the education, cultural, and civic sectors, especially around K-12 and tertiary education. And we're really seeing the Asia Pacific region now starting to explore the opportunities around these technologies created and delivered in the web, not just through native apps.
So thanks very much for having me. And any questions, my contact details are there. Thank you.
Virtual cities
Sean Audain from Wellington City Council talked about how the council uses its solid data core to deliver a Virtual Reality experience that informs and engages people in city planning.
Video transcript
My name's Sean. I'm from the City Council. And I am originally an urban planner, and this little story about technology is a little bit of the story of our discipline and how it's moving. One thing you should understand, planners don't do what you think we do. We don't control your lives. We don't change the colour of the city from space and have different types of buildings spring up.
What we do is basically nudge the city to behave in different ways so that we can get some sort of certainty to drive investment cycles, make sure the water flows tomorrow, make sure that you can get to work, all that kind of thing. And this is the way we have always communicated.
This is a simplified version. It's a lovely map. It's of Te Aro. And there is an absolutely overwhelming amount of information on it. It talks to you about everything from where future roads are going to go, to how the ground shakes during an earthquake, to how high you can build, whether you can have a cafe on the ground floor, if it's going to be a park or not. And that's backed up by a 10.5 kilogramme document, and those colours tell you which bits of it to read.
Oh.
Sheesh.
Yes.
You said kilogramme.
Yes.
Oh, my god.
I gave up counting the pages so I just weighed it.
[LAUGHTER]
And then we started to communicate like this. So this is getting a bit closer to the kind of city we experience every day. The part of the planning discipline I'm from is the part that measures cities. So what I was sent out to do was to understand how the city moves. What are the metrics of it? Where are the little footprints that people leave behind, without getting all creepy and big-brother-ish about it?
And so all this is telling you is basically how much space is used by cars. Everything in red belongs to cars, and everything that's not red belongs to us people. What's really interesting is when you work out that only a third of people in Wellington actually use their car every day, because that's not how the public space is divided.
And so what we started to do, we started to push through this reality continuum that you've heard about. The difference with us is we decided to journey through it over time. So once we had our three-dimensional city, we started to use it for different things. So this is from the Kaikoura earthquake.
All of us at the City Council have two hats. One of my hats is as one of the Civil Defence Intelligence managers. And what this is showing you is how we were bringing down 61 Molesworth Street and who could access what buildings. This was just a high-level picture, which allowed people like Ministers to see instantly: ah, that building's a problem, this is why it's cordoned off. So they could understand the logic of it.
What it allowed people like me to do is work out how many businesses were affected, how many people needed to be re-housed, how many workers it was going to take, how much it was going to cost, and what I needed to have my masters petition government for.
And then we started to move into full virtual reality. As an organisation, we are a bit weird. We do augmented reality too, and we did augmented reality first. Our first experiments were about four years ago. But at the moment, we're much more interested in virtual reality because it gives us more control. Being a planner, you're a complete control freak.
So this is what our virtual Wellington looks like. You can pop into it over there, if you like. We can control time. We can take the data from the city sensor networks, which are being deployed at the moment, and start to understand how pedestrian flows move through this. And what it means is we can take the vast quantities of data which we need to communicate with our citizens, and understand the different patterns in the city, and put them into a world where you're not concentrating on where you are, you're concentrating on what's happening.
And that's where virtual reality really becomes useful to us, because we can take really complex things like-- I believe this works-- things like this. This is the depth of bedrock, and the different shadings show you the angles. This is one of the reasons why different parts of the city endured more damage last year than others. What was essentially happening was the energy waves were coming into the city, hitting these different surfaces and being angled and focused upwards. It was sort of like a person with a magnifying glass and some ants.
We had a lot of talk about things like reclaimed land. To be honest, that wasn't as big a factor as people think it was. Reclaimed land is not a unified consistent thing. Some of those reclamations are made from the rock that was pulled down from underneath the monastery. Perfectly solid. Some of them are pumped sludge off the sea bed. Not so good.
And so what this allows us to do is to communicate these really complicated effects and situations between disciplines, which allows our organisation to behave as a single organisation. But it also allows us to talk to people who are not experts and have some hope of being understood.
And all of this gets pulled into this system. So this is our city in a box. We developed it with our friends at MEC. And it is built on top of the city's data core. Being in public service, the drivers that I have are a little different from the drivers the private sector has. I'm stuck with permanence. If it doesn't work, I can't close down the company and start doing something else.
Unfortunately, there will always be a form of local government. And so what that means is we have concentrated on creating a good solid data core which we're progressively opening, which allows people to develop things on top of our data but also allows us to develop. And so we can deploy solutions like this relatively simply.
So in the past year, it's been used to engage on earthquakes, city planning proposals, help people understand what population growth looks like. At the moment, I'm using it to help people understand how things like water systems are inter-related and how that's going to drive their rates bill during the next 10-year budget conversation we're about to have.
All of this takes what was essentially a wonderful toy at the beginning of my career that caused servers to smoke, and it's turned into a really useful tool. And it's one of those things that unfortunately I can't really show you unless you hop into it. So after I've finished, come and see me, and we'll play with it.
Cultural connection
Brian Goodwin from I Want To Experience talked about the Virtual Reality product he created with Phil Bott through Te Papa's Mahuki innovation accelerator. Their product enables people to explore the world of passionate experts, whether they are explorers, artists, creators, or historians.
Video transcript
I'm the founder and CEO of I Want To Experience. And over in the back there-- over there-- is Phillip Bott, my co-founder. Phillip and I have worked in the movie industry for the better half of our lives. I mean, we've been there for 15 years. We worked at Weta Digital for 10, where we worked on films like Avatar, The Hobbit and Planet of the Apes, and our lives changed when we tried virtual reality. We knew that this medium was something very different.
So in early 2017, we formed I Want To Experience. We feel that virtual reality can change the way you see the world. It's often thought that its strength lies in immersion and its sense of presence and engagement. However, we feel we can take it a step further. We feel that the strength of virtual reality lies in the sense of intimacy.
At I Want To Experience, we immerse you, the audience, in the world of passionate experts. We describe our product as a behind-the-scenes TED Talk, where you go into the world of these experts, where they are no longer on stage, but in their workshop, in the space where they get inspired. In many cases, you just want to see what they are like as real people. You ultimately want to connect with them. So what we're doing is bringing hard-to-reach people and places to your doorstep through virtual reality.
In the early stages of our testing, we were hoping for a few minutes of engagement, perhaps two or three minutes at best. As it turned out, our experiences pulled people in. They often stayed immersed for up to 35 minutes. We had a funny situation where the American ambassador, Scott Brown, came 'round, and he asked us to stop interrupting him while he was immersed.
So it was with great delight last year that we were accepted into the Mahuki lab at Te Papa, where we were immersed in the cultural space. And there we learned how to navigate the cultural world. Because we came straight out of the film industry, the cultural sector was a very fresh and different world to work within. So it was great to actually understand it and work out where we can add value.
Fast forward five months, and we're now in the final stages of our first commercially available application, which will be launched at Te Papa in two months. It features the award-winning artist Lisa Walker, where we go inside her world with the launch of the new art space that's opening on the 16th of March. We're also in early discussions with Marlborough Museum and MOTAT about creating on-site virtual reality experiences for them.
However, in order to expand, people and institutions need to adopt the use of virtual reality. Ironically, the biggest concern institutions have about adopting virtual reality is the lack of adoption itself. I mean, why invest in technology if it's not massively adopted yet? And the biggest concern for manufacturers is the lack of content. However, we're at a tipping point where companies are starting to take this seriously. Big companies like Facebook are investing billions of dollars in creating the hardware and platforms required.
Behind me-- well, over there-- is a graph that illustrates Goldman Sachs' projected sales of virtual reality headsets within eight years. Doing the maths, just shy of a billion means one in every Western household. However, I prefer to scope the problem to a smaller set and think about what's actually happening right now.
These are some of the institutions that are successfully integrating virtual reality into their spaces. Across the pond, we have the Australian Museum in Canberra, which is generating half a million dollars' worth of revenue showing virtual reality on site. What I like to do is look at other industries and see how those industries are going to affect us.
If we think back to the release of the first iPhone, that was 10 years ago, in 2007. No one had a smartphone, and now smartphones don't leave our side. It's only logical to assume that, because virtual reality technology is being folded into the next generation of smartphones, and current releases as well, as it progresses it's going to be part of our natural lives. This mass adoption is going to take place.
Exponential growth occurs when overlapping technologies exist. And right now, for the first time in history, we see a convergence of all these affordable technologies coming together to allow an immersive experience to be available and streamed online. Prices are coming down, and as this technology becomes more affordable and works its way into the hands of consumers, we're going to be partnering with arts and cultural institutions to create compelling content for the audience. This is a small demonstration of our product. Let's hope it plays.
[MUSIC PLAYING]
Great. So we're building a virtual reality application on mobile technology where you can step into the world of passionate experts from the comfort of your home. It's a curated, family-friendly ecosystem where you can explore the world of passionate experts, whether they are explorers, artists, creators or historians. It's an interactive world. As you can see, you look around and, as you look around, it's very simple. It's designed so that it's easy to use.
But essentially, what it is is the way that we designed it is that all of the experiences are interconnected. All of these experts are creating a fabric where we can learn to explore the world through the eyes of an expert, but also through things that inspired the expert. So as you saw in the previous video, we were moving between a Maori carver, who's very well-respected within the community, and then we were also back of house within Te Papa. And those two worlds share commonality.
What we want to do is create a virtual world where the user can actually explore these two worlds. What we essentially do is allow our users to follow these individuals who create content, and you can find out what they're up to in the real world. It will basically be a Facebook-like environment, where they can publish content and you can keep up to date with what they're up to. You can obviously choose to share this content with your family and friends.
We will also be providing the tools to allow customers to take control of the creative process: a simple online authoring solution so they can add content themselves, preserving their stories for generations to come and creating a rich and diverse range of experiences for our audience.
Everything is done via the cloud, so this gives our customers access to their content from anywhere in the world. All of this user behaviour gets integrated into Google Analytics, so we can grow the content based on user-engagement metrics.
Ideal customers would include museums, business and enterprise, and cultural organisations. Within the museum space, we found that it was ideal because virtual reality fits like a glove. One of the mandates of a museum is to extend their offering to the community, so with virtual reality you can do this. You can break down the walls, and you can reach parts of your audience that never attend museums.
Business and enterprise can use our application to show their point of difference to their customers, to try and express what makes them unique within this platform. However, at our core, we want this medium to benefit institutions in preserving culture. Because by leveraging the medium's strongest quality, that sense of intimacy, we believe that we can create empathy. A lot of people feel that virtual reality is the ultimate empathy machine. When you have the opportunity to look at the world through someone's eyes, you can actually have a connection with them. And for a brief moment, that connection can allow you to bridge cultures. And that's why we feel this is an undeniably powerful tool.
So our road map moving forward is finding organisations who are interested in using virtual reality. And we want to, essentially, come on board and assist them from the creation of the footage to the deployment of the actual content within their space, on their hardware, and specifically with the aim of empowering these organisations to create content themselves so that they can use our platform to share their content. So we thank you for your time. And we look forward to talking to you more in person.
[APPLAUSE]
Virtual training
Ben Knill talked about the Show How Virtual Reality training platform that delivers first-hand learning experiences, particularly when things are really hard or expensive to simulate in real life, for example for aviation staff, probation officers and medical professionals.
Video transcript
[Show How] It's a training platform, and it actually shares quite a lot in common with what these guys are doing. We're based out of Projector, which is a VR space on Courtenay Place, where there are lots of different things going on, lots of fun. If you want to come check some things out, it's a good place to come.
And we're very much focused on the training space, and it's a platform for other people to be able to make training resources. Organisations make their own training tools, mostly for real-life experiences, things that let somebody experience what a situation or scenario is going to be like.
So we've been piloting for about a year, and now we're into the production stage of the next version, which is all-singing, all-dancing. We've spent a lot of time going around and seeing how people could use this kind of technology. So we've been inducting aircraft engineers on aircraft and showing safety procedures.
We've been doing helicopter landings. The Defence Force have an issue where you have to train staff, like baggies, in what it's like to unload a helicopter. Helicopters are very expensive, very hard to come by, and half the time they don't turn up. So it's a way of practising what happens under the helicopter, the procedure you have to get through, which can be done on a mobile phone or a headset. It saves them $25,000 an hour and keeps another helicopter in the air.
Then there are things that get taught with Corrections. Probation officers have to go into new houses; they have to keep safe, check for weapons, deal with dogs, deal with other people who shouldn't be there, and check if somebody is not supposed to be drinking. All the things they have to go through are currently trained with a presentation like this, six months before they go out into the field, and we give them a piece of paper: check for dogs, things like that. It's like anything: until you've done it two or three times with somebody, you're not very confident, and you're never going to remember everything you're supposed to do. So it's a way to practise, to go through a process: get to the door, shake, check for dogs; if somebody goes to let you in, make sure you're the person to go in behind them so you can close the door and check the lock. All the little things that are really hard to remember unless you do it in real life.
We've been doing checking for asbestos, going around looking in houses, checking for asbestos and the places you might find it, and showing what it looks like. We've been doing settling students, so what it's like for an international student to come to New Zealand, all the things that you might not think of. Things like getting on a bus-- apparently it's a big issue-- paying for things, just all the different interactions with locals that people might be scared of, just ways to simulate and experience that in a way that's safe and they can do themselves.
We've been doing health and safety inductions in places that are really hard to get to, or where somebody has to show you around. It's very time-consuming, very labour-intensive. And nothing's recorded-- that's been a big thing. With inductions, legally you have to do them and you're supposed to be shown everything, but somebody forgets something, or they've changed their mind on what's important, or you didn't understand it in the first place, so you can get a completely different induction to what the organisation legally thought you were shown. So it's a way of recording that, yes, you were shown it: 'these were the fire alarms', and you click on them and acknowledge that you've seen them.
And then we've been doing quite a lot in the medical training space. So again, places where things are really hard or expensive to simulate. The things that people need to practise are currently taught very academically, but this is a way of seeing them evolve in front of you and making decisions on the fly, based on the information that you have in the room.
So e-learning-- in education it's quite dry, it's quite old-school, and it hasn't really changed that much over the years. It's quite fertile ground for someone to come along and change things. Typically everything's been very paper-based, classroom-based, lots of forms, lots of just ticking to show you've done something, being assessed by spending a certain amount of time in a room without actually having what you did in that room assessed. The best way to train someone is to show them in real life: take them to the room, put them there, have somebody dying in front of them and have them try to fix them. But obviously that's really expensive and difficult. So we're trying to create something that sits between the real world and e-learning as it is, taking a lot of the theory of both and putting them together.
VR's really good for this kind of thing. Some things people have said: it's really good to have somebody stuck in there, where they can't concentrate on something else until they finish the course. Especially dealing with millennials, who like playing on their phones, it's having something that can play on their phone and then assess them. Things that are best shown as an experience are really suitable for VR. There are lots of things being done with it that don't need to be done, and people are trying things all the time now, but training seems to be a really good use for it.
We're using, like a lot of people, mobile devices, low-cost headsets and cheap 360° video cameras, just making this stuff really accessible. Things that people can produce themselves quickly, easily, and not too expensively.
I'll show you this medical training. So this is Waitemata DHB. This is junior doctor training for situations and scenarios where typically the doctors will do this training, but they'll get five doctors off the floor. They have to get five locums in, put them in a room, and practise these scenarios. Very expensive, doesn't happen very often, it needs to happen more, so this is a way of replicating that.
[VIDEO PLAYBACK]
- Oh hi. Welcome to Waitemata DHB Immerse, an interactive medical training experience. Take a look around. Go ahead and choose a scenario. Good luck.
[BEEPING]
- Thanks for coming. He's 63 years of age, with a history of ischemic palsy.
- OK. Let's do this together. Let's allocate a task so we can get the important things done. Becky, can you take control of the airway? Lee, you operate defib.
- Visually.
- Sarah, can I get you to coordinate chest compression and then keep swapping. Yes, that's good. Is everyone happy with their assigned tasks?
- Yes.
- OK. What would you like to do now?
So we have multiple-choice answers, but these are moving to voice next, so you can do your answering with your voice.
- Correct. Ventricular tachycardia (VT) is a shockable rhythm, and you want to deliver a shock straight away before continuing CPR.
- OK, we'll need to deliver a shock. Sarah, have you prepared the charge during compressions?
This is a different scenario.
- Hi, I'm Andrew. This is Dr. Carver. Could I see the CT?
- Of course.
- Thank you. Take a look. How would you like to begin?
And these are interactive things that you can add in, drop in, so lots of things you can look at, click on.
- Now let's debrief on what happened.
And we finish with debriefs for every scenario, so you go through what happened.
[INTERPOSING VOICES]
- --67-year-old female who was admitted with a chest infection. She'd been complaining of palpitations, and her ECG showed that she was in AF with a fast ventricular response. We assessed for signs of decompensation.
[END PLAYBACK]
So that's one of the five modules in that training experience. Typically you'll go through a situation and be presented with multiple choice. There's the right answer, which is perfect, and you go through to the next stage. There are less-right answers, where it's kind of good but it's the old way of doing things, where they'll explain to you: it's kind of right, think again. There's the wrong answer: more adrenaline, or something. And there are really bad answers where it all goes wrong and they die, and you go back to the start. [LAUGHTER]
So we have multiple choice answers you look at.
In the next version you just say the answer, so we're using some voice recognition. We're also doing more with the debriefs: they want you to practise explaining what happened to someone else. So you use your voice to explain your process, what you're thinking, and that's used in the assessment, especially where English is your second language: your ability to communicate with other doctors or other departments and explain what the problem is.
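As a toy illustration of how a spoken debrief could feed an assessment once speech-to-text has produced a transcript, a scorer might check coverage of the key points an assessor expects to hear. The keywords and pass mark here are invented; this is a sketch of the idea, not the Show How implementation:

```python
# Toy sketch: score a transcribed spoken debrief against the key
# points an assessor expects to hear. Keywords and the pass mark
# are invented; real speech recognition would run first.
EXPECTED_POINTS = {
    "identified shockable rhythm": ["ventricular tachycardia", "vt", "shockable"],
    "delivered shock": ["shock", "defib"],
    "continued cpr": ["cpr", "compressions"],
}

def score_debrief(transcript: str) -> float:
    """Return the fraction of expected points mentioned in the debrief."""
    text = transcript.lower()
    hits = sum(
        any(keyword in text for keyword in keywords)
        for keywords in EXPECTED_POINTS.values()
    )
    return hits / len(EXPECTED_POINTS)

debrief = ("The patient was in VT, which is a shockable rhythm, "
           "so we delivered a shock and then resumed compressions.")
coverage = score_debrief(debrief)
print(f"Covered {coverage:.0%} of expected points:",
      "pass" if coverage >= 0.8 else "review with a trainer")
```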
So there's been a study. We've done some trials with this piece of work, and it's going through another study. I think about 30 doctors have gone through it. They're all very clever people, and they adopted this quite quickly. They are a particular market, but it needs to be available to all kinds of people.
And really, what was quite useful with doctors is that they are very academic. They have been to university for a long time. They're just finishing this process, just out into the real world, but the thing they face is that unless you see, say, an anaphylactic shock during your doctor training, the first time you see it is in the real world, where you are the doctor in charge and you're having to remember the training. So the idea is ways of practising what would happen in that situation if you don't see it in your apprenticeship.
And we had lots and lots of feedback, lots of interesting stuff that came out, and it's really helped design the next phase of this product. A lot of it was around confidence: people feeling confident that they've seen what happens in this situation, that they have an idea of how it could play out, and feeling like when they go out to the wards they're more able to deal with the situation. And there were lots of things people thought they knew really well and it turned out they didn't, so it's a way of testing that's very different from the standard way of testing, one which makes you think more than an exam does.
Lots of things about knowledge retention. Seeing the result of your answer, what happens when you ask the nurse to go perform something, how long that takes, what the impact's likely to be, what the process is, all helps with knowledge retention, because it's not just ticking an answer, it's seeing what the result's going to be. One thing that, all over the training industry, people are trying to get to is the idea that you learn on demand: you learn when you're thinking about it. You don't get packed away in a room like this for a week, pumped full of information, have a lovely time, and remember like 5% of it. Instead you do a little 10-minute course: you're thinking about what happens in anaphylactic shock, and you practise it again. We had people saying they were doing it on a plane. The thing is, it's almost fun. If it's something that you're interested in, it's your work, it's an experience that you want to practise, it's almost like a game, it's fun. So we're trying to get into that space where these things are entertaining for people.
So this is all going into a new platform, with loads of the feedback changing the things that we thought were going to be important. We thought it was going to be about health and safety inductions, but it turns out that's not such a big field. Situation training, scenario training, people skills: these have all become obviously more important. And so the platform that's very close to being finished is around defining learning objectives and what you need somebody to know.
The app helps you record all of the content, so video, audio, photographs, whatever is most appropriate for the thing you're trying to train. Then you just drag and drop into a little framework to make a course and put in some tasks. Typically, tasks will be a multi-choice answer, saying an answer, or clicking on something in the room: something that assesses your ability to do that. This then gives you your results, and we integrate into learning management systems and other systems, so that you can pull those results in and access them to see what people have learned. And also just basic things, like getting an email to say this person did this test, however people want to get this information.
The next part is also kind of like "pick a path", so you can have lots of different situations. Before, we had a right answer, great, and a wrong answer, you're dead. Now a wrong answer can lead to a different scenario that plays out, and you can get there eventually; it might be OK, but you have to do some more work to get there. Or it could be the right answer and you skip straight through. Or you can go back to a scene where something happened. So it's more like trying to replicate what happens in a real-life scenario.
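A "pick a path" course like this is essentially a small graph of scenes, where each answer points to the next scene and wrong answers branch into recovery scenes rather than simply ending the course. Here is a minimal sketch of how that might be represented; the scene names, prompts and answers are invented:

```python
# Minimal "pick a path" scenario graph sketch. Each scene maps an
# answer to the next scene; a wrong answer can branch into a recovery
# scene instead of ending the course. All content here is invented.
SCENARIO = {
    "assess": {
        "prompt": "Rhythm check shows VT. What now?",
        "answers": {"shock": "stabilised", "adrenaline": "deteriorating"},
    },
    "deteriorating": {
        "prompt": "Patient is deteriorating. What now?",
        "answers": {"shock": "stabilised", "wait": "restart"},
    },
    "stabilised": {"prompt": "Patient stabilised. Debrief.", "answers": {}},
    "restart": {"prompt": "Patient lost. Back to the start.", "answers": {}},
}

def play(scene: str = "assess") -> None:
    """Walk the scenario graph interactively from the given scene."""
    while True:
        node = SCENARIO[scene]
        print(node["prompt"])
        if not node["answers"]:  # terminal scene: debrief or restart
            break
        choice = input(f"Choose {sorted(node['answers'])}: ").strip().lower()
        scene = node["answers"].get(choice, scene)  # re-ask on unknown input

if __name__ == "__main__":
    play()
```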
And then there are two ways of looking at it. Some organisations need things to be very secure, so the Corrections stuff we can't show, and the Defence Force stuff we can't show. It has to be locked down, internal. We can put it onto their own servers, so it never leaves their network.
And then some things we're making very social, so they go into a Facebook app as well, so you can do courses on Facebook and make things public. And there's also a store, so people who want to make courses can sell them to organisations as little modules: people can create their own training resources and sell them on. That's it.
Experiencing possible futures
Pia Andrews from the Service Integration LabPlus team at the Department of Internal Affairs (DIA) talked about using Virtual Reality to demonstrate possible future service delivery scenarios where government could play quite different roles in a person’s life.
Video transcript
So I'm going to talk about just two things. We were hoping to have something for you to experience today, but I'll get to what we're doing in a moment. I also wanted to share with this group some of our early analysis of the different categories of use of VR, broadly, so that particularly those working in government, but indeed anyone, when they're looking at how they could engage with these technologies as opportunities, can think about them in a categorised way. There are a lot of use cases out there, and we're trying to analyse and understand what they mean, and where this is useful and where it's not. I'm going to share some of those insights as well. The other thing is that this room is full of amazing people, so thank you all so much for coming. I know a lot of you have a lot of stuff to share in this space as well. So when we break to experience stuff, please share your experiences, your case studies. We'll send stuff out with all of this material, but if you have particular things that we can point people to and reference, let us know. We want to give you a full range of these things from this session.
So, I guess the first thing I want to share is some early analysis from us. I think that all comes up, roughly. The key thing there is there are a couple of key use cases that we're seeing. The first one is that concept of emulating and experimenting with the physical world. We saw a lot of that with the Wellington example; town planning does a lot of that, obviously.
The learning-to-drive, fly, or do-surgery sort of examples, and the one we just saw, are good examples of that, where the physical world is being emulated for the purpose of engaging with it and modelling, for the purpose of training, or indeed for remote work, like the mining example. In Australia, this has been around for a little while: there's a dome you go into, and it's a full virtualisation of remotely operating very heavy machinery in Western Australia in real time. Of course, that needs very fast broadband, and I won't go into that issue in Australia. But it's that idea of experimenting with and interacting with the physical world.
The second one that we've really seen a lot of emerging is as a design aid. We're seeing a lot of agencies using virtual reality worlds and augmented reality to help design new services, to test new designs with users, and to just experiment with things before they put them into practice.
Now with software, of course, we can do rapid prototyping and testing with users, but for physical spaces, less so. There's a good case study of an agency that used virtual reality to create many different prototypes of physical spaces, for Customs in this particular case, so they could test those with users and see what worked. So it's being used in design to test and experiment with users.
There's also the presentation of user experience. IRD have done some incredible work in user research around businesses being able to engage with the business systems in New Zealand, and helping their senior executives understand just how complicated it is. So: I want to do this. What are my options? Here's a whole wall of options. Oh crap, I guess I'll try that one. It's just to help people get that empathetic experience of how frustrating it is as a small business trying to actually interact with government.
So that's the design aid. The third one is around, obviously, augmentation of the experience. You've seen a number of examples of that. I'll share a really little one with you, which is, I think, particular to New Zealand as opposed to Australia. You have a lot of imported cars here with various languages on those cars. I got one of those imported cars, and it's in Japanese. I can't get it out of Japanese, which is-- my husband speaks Japanese, but I do not. I only have Chinese.
So my favourite VR-- oh, it's an AR app right now-- is real-time translation on the fly. I can just hold up my phone and I can see what my car is telling me, which is kind of important at times. With that real-time language translation, we actually are on the cusp of having the Babel fish, which is kind of exciting. And it's there for shopping, for emergency response, for all kinds of stuff. Augmenting: that's AR, not VR, generally speaking, although I think there's a couple of examples. Generally speaking, it's about overlaying information on your actual real-world experience in real time.
The fourth one is around telling stories and empathy. Now, we just have to be really careful with empathy because, obviously, seeing from a first-person view-- I mean, I don't know how many of you are gamers. I'm a life-long gamer. Just because I've played Doom doesn't mean I know how to shoot a demon.
So we've got to be kind of careful because it does give a certain amount of empathy. It does give a first-person experience. But people still bring their assumptions. So we still need to help people understand that empathy isn't just walking in someone's shoes. They also need to understand circumstances and privilege, and all those kinds-- and biases, and all those kinds of things. But it does help to give a person the first person experience.
The example of how we're playing with virtual reality that I'll share with you in a moment is one of those examples, where we are using virtual reality as a way to give a first-person experience of alternative futures, because we're nutty that way. But there are other examples, around movies 2.0, around interactive movies on Netflix now. If you've seen them, there's one for my kid with Puss in Boots where you can actually-- it's effectively a choose-your-own-adventure in movie form. OK, cool. It's getting more and more interactive. And, of course, service delivery and new channels.
As Alex pointed out in the earlier presentation, the number of people who are using this every day is really very small right now. But we've gone from the web, we've gone through the mobile days, and this is just the next series of channels. So particularly for people working in government, but of course for anyone, we need to know what the next channels are and how we respond to these channels. And we also need to make sure, and this is one of my great fears, that we don't just repeat the mobile app lesson. Let's not go and build 30,000 apps at x amount per pop and then have to update them on each individual device. Let's try and think about how we could take a platform approach and jump straight to the responsive-web equivalent of mixed reality. There's not, I think, an answer to that yet, but it's certainly something to take into consideration. So there's a couple of lessons. I'm sure other people will have other ones as well, but that was just some of our early analysis, which might be useful to share.
Our experiment is this. The main problem that we face working in service innovation, from a service innovation lab, is that when we talk to people about thinking about the future, about planning and designing for users in different ways, engaging in different ways, and co-designing-- all the different things that we can do in service design-- in the back of a lot of people's minds they're actually imagining just a website or a mobile app as the thing that gets delivered at the end of the design process.
Now, where are we going in the future? What do integrated services look like? What do different futures look like? What do different modes of delivery look like? What are the different options well into the future? Because one of the challenges is that if you continually iterate away from pain, you're not necessarily heading anywhere meaningful. So what's the meaningful place that we want to get to?
So we are doing an experiment. A very, very experimental little thing where we're taking all of our analysis around emerging tech and emerging trends, including societal trends, democratic trends and other trends, and a very distinctly Kiwi and Maori perspective, which of course we have other experts, not me, to understand, being a foreigner. And we're trying to say: well, what's an optimistic future that we can predict? What's a 50, 60, hundred-year view of where we want to be?
So rather than saying, well, technology can automate a huge amount of jobs, we could be saying, well, technology could automate the chunk of jobs we don't want to do, and we could actually design a different life. What if, rather than working 40, 50, 60 hours a week, we planned as a society to work 10 or 20 hours a week for more money, and then actually had more time for education, for invention, for art, for all the things that make a society great?
Why don't we use technology to usher in a new renaissance, just for fun? So what we wanted to do was to start that conversation and, at the same time, experiment and get our skills up with VR and, to a lesser degree, AR, and explore with some of our colleagues in government what an optimistic future would look like. And to lay down the gauntlet and say, cool, you're not going to like all of this. You might like some of it. You might like none of it. But what are your ideas? And actually start that conversation. So we'll hopefully be part of a few initiatives around this sort of exploration of optimistic futures. This is just ours.
So we included concepts around users, or people, being more in control of their lives and their environments and the choices around them. That was a big part of it. We also looked very closely at the concept, which is weirdly new, of personal use of AI and data.
I mean, a lot of people are starting to see personal helpers and stuff turn up on their mobiles. But they're provided by a company, or they're provided by government, and they're provided with the needs of those organisations as part of their design, of course. Why couldn't you have something like, I mean, it's all science fiction, but in Neal Stephenson's The Diamond Age, where you have a book that opens up, a helper that actually tethers to your needs, to your personal needs, to what makes life better for you, but then interacts with the government AI, interacts with companies, interacts with things, and helps you actually navigate your world? So we played with that as a concept. We played with a whole bunch of stuff around open government and new models of democratic participation, because it's the sort of stuff we love, and with a set of emerging techs and trends.
There were a lot of other considerations that went into it, but it is just kind of for fun, and it's there to start a discussion about these things.
So, this is a very, very early demo. It's not fully fleshed out. It's actually due to be launched on the 21st of February, but we thought we'd share it with you. It's got some creepy stuff in there: for anyone that's not aware of body hacking, you're about to get very uncomfortable. So it's got a lot of concepts in there.
What we will be doing is releasing the VR as an app, unfortunately, even though I want it to be a platform; as a video, for people who don't have access to VR tools; and as a more low-fidelity mobile app. But we'll be releasing all the code so anyone else can expand upon it as they will. The other thing we'll be doing is releasing the script and the information around it to say: here's how we got here, here's what we've explored in here, and here's where this is all coming from. So that will all be made public. Wherever it's useful, great. Wherever it's not, please tell us. And we expect you to kick us when we get it wrong.
So, the thing I'm going to show, and again, it's not fully fleshed out, it's very, very early days. I don't know if I have sound. So let's just try this out.
And we actually did our first round of user feedback testing yesterday, so there's a whole bunch of feedback that we have that hasn't gone into this yet. But it will give you an idea. Are we going to have sound? Maybe.
OK, hold on. Let me see if I can just shift the sound. I probably can't, so you probably can't hear it. All right, I'll just play it through and talk it through, roughly. OK, so the idea here is you start the experience and you've got a little helper who's actually on your wrist. If anyone's played Fallout, it's a very similar concept. You have a little helper who helps you navigate, who is talking to you and telling you about options.
In this scene, we're actually hearing from a 138-year-old economist sharing lessons through the medium of dance, because why not. But you're about to participate in doing your education. There's a whole bunch of artifacts that will come through there, but really this is just about exploring different models of education. A really great book, if anyone hasn't read it, is Ready Player One, which has a whole bunch of concepts around education being done through virtual reality, and high-quality education being delivered that way. So that's worth checking out, and this is a bit of a hat tip to it.
You go through your portal into your business space, where your business has automatically been given a new grant, generated based on its growth projections and behaviour. And so you get to decide how you want to spend that grant. There are different buckets here: one around sustainability; one around, I think, advertising and business growth; and one around staff and improvements to the environment for your staff.
And so the idea is that you're getting these different ways of, first of all, getting information and getting grants, and of having a lot more stuff projected around you but then being able to interact with those things in different ways. So that's that.
Oh yeah, and he's also telling you, the impact of doing this will be that; the impact of doing that will be this. And so you can actually start rolling back decisions. The impact of throwing up lots and lots of billboards might be that you get an immediate spike in sales, but it has a long-term negative impact on your local reputation. So is that really what you want to do? It's really getting at that idea of having more projection and modelling around the decisions you make in your own life.
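As another aside from the transcript: one way to picture that "preview the impact, then roll it back" interaction is a small decision log, where every choice carries modelled short- and long-term effects and can be undone before it's committed. This is purely an illustrative sketch under those assumptions; none of these names come from the lab's actual code.

```typescript
// Illustrative model of decisions with projected impacts and rollback.
interface Impact {
  shortTerm: string; // immediate projected effect
  longTerm: string;  // modelled downstream effect
}

interface Decision {
  label: string;
  impact: Impact;
}

class DecisionLog {
  private applied: Decision[] = [];

  // The helper narrates the projected consequences before you commit.
  preview(d: Decision): Impact {
    return d.impact;
  }

  apply(d: Decision): void {
    this.applied.push(d);
  }

  // Undo the most recent choice, as in the demo's rollback interaction.
  rollback(): Decision | undefined {
    return this.applied.pop();
  }
}

// The billboard example from the talk, expressed in this model:
const billboards: Decision = {
  label: 'Put up lots of billboards',
  impact: {
    shortTerm: 'immediate spike in sales',
    longTerm: 'negative impact on local reputation',
  },
};

const log = new DecisionLog();
console.log(log.preview(billboards)); // see the trade-off first
log.apply(billboards);
log.rollback(); // decide it's not worth the reputational cost
```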
So we're moving to the next scene. In this one, it knows what your mountain and your river are. And you've got the ability with your tax to actually direct, maybe, 10% of it to be spent however you want it to be spent. OK, I want [INAUDIBLE] my tax to go on employment programmes or on education, or whatever. It's the idea that, rather than just being consulted on where you want tax to be spent, you're actually able to directly direct a proportion of your tax.
And in this case your river and your mountain are your highest priority. And so your helper is saying, well, your river has got a spike in issues around it. Would you like your amount of money this month to go into investment in your river, to help keep it clean?
This one gets freaky. So this one is based around health, and we've spoken to a few people from different agencies about this, by the way, as well as a few people from outside.
But it's really, like I said, just a discussion starter. In this one your helper is interacting with your GP and also with international data around health issues, and it's got analysis around some issues you've had with one of your arms. And so you've got a choice: would you like to take the culturally sensitive way and just get a replacement of the arm, or just get surgery? Or would you like to go a slightly different way and actually replace it with a tentacle? And so then it gives you that kind of interaction.
There's also the concept here of food actually growing on the outside of buildings rather than being grown miles away. So there's a whole bunch of intricate concepts in here. In this one you've chosen to take the tentacle, fine. So with the tentacle you're going to play. Yes, with the tentacle there's a lot of interaction throughout all of this.
But you're picking off the fruit and moving on to your civics class, and getting that feeling of what that would look like. So I think that's pretty much the end of it. There's another wrap-around back to education where it talks about, what do they call it? It was very clever. It was "show one, share one" or something. The idea is that everyone actually contributes part of their time to civics.
And in this particular case you've chosen to be the teacher in your school, in your education setting, for the week, to share your particular experience with the class. So there are a lot of concepts in here. It's going to be quite interesting and will hopefully spur a few conversations. And we look forward to seeing some of the other optimistic futures.
So in our case we're using it to spur a conversation, to give a bit of a first-person view of what the world might look like, and to hopefully create a bit more empathy in our users, public servants, encouraging them to think about the world in slightly different ways. But also to spur some of these conversations about what the world could look like if we were to project it out, rather than just making slightly better versions of the now. That's about all I have. Thank you very much. I'll pass back over to Nadia.