The Coolest Tech I Saw at CES 2020

[Image: parallel-reality-header.jpg]

It was a brisk Wednesday morning in Las Vegas, and we were packed onto the shuttle bus to the convention center. Since I was one of the last to board, I ended up at a table facing the rear of the bus. As we sat in traffic, the man across the table struck up a conversation. It was pleasant and rather generic. I learned he was from Israel and ran an energy monitoring company whose mobile app gives feedback on the energy consumption of your home or business.

At some point during our conversation he told me that every Consumer Electronics Show (CES) was the same and that he hadn’t seen anything new or overly impressive in years. The previous day I had attended the Delta Air Lines keynote, and something I heard there knocked my socks off, so I figured I’d throw it out for consideration. I asked him if he had seen the parallel reality display yet. “No. What is it?” he replied. After I spent a few minutes explaining the concept, he smiled slightly and said, “Ok, yeah. That is something interesting.”

A (not-so-distant) future vision - parallel reality

During keynote presentations at conferences you are often shown grand visions that can feel too good to be true. That’s exactly how the Delta Air Lines keynote was progressing: fanciful visions of what travel will look like in the not-too-distant future. Then, in the middle of the presentation, they brought out Albert Ng, the CEO of Misapplied Sciences, and he began discussing the concept of parallel reality displays.

At a very high level, parallel reality allows multiple people to see personalized content on a single display at the same time. The use case they discussed was the big board at an airport. Traditionally, if 20 people stood in front of the board, they would all see every arrival and departure listed and would have to search for their particular flight. With parallel reality, each person sees only the details for their own flight and nothing else. I was immediately intrigued.

As the keynote progressed it became clear that this was a pretty interesting technology, but surely it was years off in the future. They then informed the curious audience that not only could this be done, but a pilot would be going live at the Detroit airport that summer, and we could get a glimpse of the technology at their booth. This, of course, was met with applause and a few gasps. Hey, it’s a tech conference. We get like that sometimes.

The experience with the technology

Later that day I found myself at the Delta booth, eager to see this new technology, with questions racing through my head. Were they using face recognition to figure out who you were? How did they render unique content for each person using the same hardware? How did they know where you were in relation to everyone else?

Let’s start with the basics

To help us understand the display technology, they first brought four of us in to stand at spots on the floor facing the screen. We stood about 4 feet apart, arranged in a slight arc looking toward the display. From my spot I could see the screen displaying “Seoul”. As I moved to the spot on my left, the display changed to show “Paris”, and then “Tokyo” as I continued down the line.

We were then told to look at the large wall of mirrors behind us. These mirrors presented an array of images, each simply reflecting what the parallel reality display was projecting in a different direction. Since I was standing at the right height and position within the array, I was seeing the city names. Had I been higher in the field of view, I would have seen flags instead.

[Image: screen-array.jpg]

Mirrors installed opposite the parallel reality screen, reflecting the content displayed. Here you can see how the display outputs different content depending on where you are standing.

Ok, so that made sense. They were rendering custom content to each position. Still, I had plenty of questions about how they were rendering different content with the same hardware.
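
To make the idea concrete, here’s how I ended up picturing it: each pixel behaves like a tiny projector that can send different light down different sight lines. The sketch below is purely my own mental model in Python; the zones, names, and angles are all invented for illustration and say nothing about how Misapplied Sciences actually drives their hardware.

```python
import math

class MultiViewPixel:
    """Toy model: maps the direction a viewer looks from to the
    content this pixel shows in that direction."""

    def __init__(self):
        # Each zone: ((az_min, az_max), (el_min, el_max), content), in degrees.
        self.zones = []

    def assign(self, az_range, el_range, content):
        self.zones.append((az_range, el_range, content))

    def content_for(self, viewer, pixel=(0.0, 0.0, 0.0)):
        dx, dy, dz = (v - p for v, p in zip(viewer, pixel))
        azimuth = math.degrees(math.atan2(dx, dy))                    # left/right
        elevation = math.degrees(math.atan2(dz, math.hypot(dx, dy)))  # up/down
        for (az_lo, az_hi), (el_lo, el_hi), content in self.zones:
            if az_lo <= azimuth <= az_hi and el_lo <= elevation <= el_hi:
                return content
        return None  # this pixel is dark from that direction

# Three demo spots in a slight arc, plus a zone higher in the field of view.
pixel = MultiViewPixel()
pixel.assign((-30, -10), (-10, 10), "Seoul")   # spot to the left
pixel.assign((-10, 10), (-10, 10), "Paris")    # spot in the middle
pixel.assign((10, 30), (-10, 10), "Tokyo")     # spot to the right
pixel.assign((-30, 30), (10, 45), "flag")      # higher up, you see a flag

print(pixel.content_for((0.0, 3.0, 0.0)))    # -> Paris
print(pixel.content_for((-1.5, 3.0, 0.0)))   # -> Seoul
print(pixel.content_for((0.0, 3.0, 2.0)))    # -> flag
```

Scale that idea up to every pixel on the board and each viewing position gets its own coherent image, which is exactly what the mirror wall was making visible.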

Let’s see how it works with an actual use case

We moved to the next room, which was more demonstrative of a true use case. We scanned fake boarding passes upon entry and moved into the open space, where I was greeted by a large display showing my personal flight information.

In the video below you can see that as I moved around the open space, the sign re-rendered to keep showing my particular details.

I asked the person working the demo how the tracking worked, and they explained that an overhead camera tracks the “blobs” of the people in the space. In effect, as soon as you scan your boarding pass at the entrance, the camera above begins tracking your blob as you move about. There is no facial recognition. I also asked whether a LiDAR camera was being used for the tracking, and they explained that it was not.

[Image: parallel-reality-camera.jpg]

The small white camera at the top of the image is mounted above the viewing area and tracks people/“blobs” as they move. This allows the display to know what content needs to be rendered and at what angle.
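
Putting the pieces together, here’s a minimal sketch of how that token-to-blob hand-off might work. To be clear, this is an assumption-laden toy: the class, the method names, and the greedy nearest-neighbor matching are all mine; all we were told is that an overhead camera tracks blobs and that no facial recognition or LiDAR is involved.

```python
import itertools
import math

class BlobTracker:
    """Hypothetical sketch: anonymous blob tracking with a one-time
    token hand-off at the boarding pass scanner."""

    def __init__(self):
        self._ids = itertools.count(1)
        self.blobs = {}        # blob_id -> (x, y) centroid on the floor
        self.identities = {}   # blob_id -> token from the scanned pass

    def on_camera_frame(self, detections):
        """Greedy nearest-neighbor update: each tracked blob snaps to the
        closest detection; leftover detections become new anonymous blobs."""
        unmatched = list(detections)
        for blob_id, centroid in self.blobs.items():
            if not unmatched:
                break
            best = min(unmatched, key=lambda d: math.dist(centroid, d))
            self.blobs[blob_id] = best
            unmatched.remove(best)
        for detection in unmatched:
            self.blobs[next(self._ids)] = detection

    def on_boarding_pass_scan(self, token, kiosk_xy):
        """Tie the scanned token to whichever blob is at the kiosk;
        from then on, that blob *is* that traveler."""
        nearest = min(self.blobs, key=lambda b: math.dist(self.blobs[b], kiosk_xy))
        self.identities[nearest] = token

    def content_at(self, position_xy):
        """What the display should render toward this floor position."""
        nearest = min(self.blobs, key=lambda b: math.dist(self.blobs[b], position_xy))
        token = self.identities.get(nearest)
        return f"Flight details for {token}" if token else "Welcome"

tracker = BlobTracker()
tracker.on_camera_frame([(0.2, 0.1)])              # someone appears at the kiosk
tracker.on_boarding_pass_scan("TRACY", (0.0, 0.0))
tracker.on_camera_frame([(1.4, 2.3)])              # they wander into the space
print(tracker.content_at((1.4, 2.3)))              # -> Flight details for TRACY
```

The important property is that the system never knows who you are; it only assumes the blob that scanned a given pass is still the same blob it has been following since.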

Now let’s see if we can fool it

Armed with this newfound knowledge, and being a technical person, I immediately tried to fool the system. Unfortunately for the man standing near me, this meant standing really close to him to see if I could hijack his blob.

In the video below you’ll see this happen as my display starts showing details for his traveler (“TET”). It happens again later in the video when I take over the blob of another traveler (“TRACY”). While I did have to get rather close to the other person, this is something to be aware of with the technology, especially when it’s deployed in crowded spaces.
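
The toy tracker above makes it easy to see why this could happen: a matcher that only sees anonymous centroids can let two blobs that nearly coincide walk away with each other’s identities. Again, this is my speculation about the failure mode, not knowledge of their actual matching algorithm.

```python
# Continuing with the BlobTracker sketch from above.
tracker = BlobTracker()
tracker.on_camera_frame([(0.0, 0.0), (5.0, 0.0)])   # two of us on the floor
tracker.on_boarding_pass_scan("ME", (0.0, 0.0))     # blob 1 is me
tracker.on_boarding_pass_scan("TET", (5.0, 0.0))    # blob 2 is him

tracker.on_camera_frame([(4.0, 0.0), (5.0, 0.0)])   # I sidle up next to him
tracker.on_camera_frame([(5.0, 0.0), (5.2, 0.0)])   # then step just past him

# My body is now at (5.2, 0), but the greedy matcher walked blob 1 ("ME")
# onto HIS body at (5.0, 0), leaving blob 2 ("TET") stuck to mine:
print(tracker.content_at((5.2, 0.0)))   # -> Flight details for TET: hijacked
```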

My questions were finally answered. The system uses an overhead camera to track blobs of people who were identified by a token at some earlier point in the process. As a blob moves through the field of view, the display renders content for that particular position. For more information about the technology, you can check out the Misapplied Sciences website.

Ultimately everything is like the movies

While the rendering of the content is a bit clunky, this is only the first iteration, and there’s no doubt that this is very innovative technology with tremendous potential. How far are we from the Minority Report experience of personalized ads based on biometric data, served up as we move through a mall? The video below gives us an idea of where we might be headed.

Sure, there’s a bit of recoil when it comes to invasive experiences like these, but there is a lot of upside as well. This is the kind of technology that provokes fresh thinking and makes you wonder about the possibilities, the sort that ultimately serves as a spark of technical inspiration, even for a grizzled CES veteran hearing about it for the first time on the morning shuttle.
