The Internet of Things (IoT) has turned ordinary homes into futuristic ones by interconnecting smart devices that collect and exchange data through embedded sensors. Now, researchers at Carnegie Mellon University’s School of Computer Science want to take this interconnection a notch higher. Instead of a separate sensor for each smart device, they envision a single, general-purpose ‘super sensor’ that can stand in for all of those individual sensors.
The team — composed of Human-Computer Interaction Institute (HCII) Professor Chris Harrison and PhD students Gierad Laput and Yang Zhang — calls it ‘Synthetic Sensors’, a sensing approach that allows everyday environments to become smart environments without the use of cameras. Leaving cameras out of the picture is an important aspect of the project: the team recognizes that their invasive nature makes them impractical in environments where privacy is essential.
So far, they’ve built a prototype with 19 separate sensor channels capable of detecting data such as color, direction, light intensity, motion, sound, speed and vibration. It plugs into an electrical wall outlet, so it requires no battery, and it is less obtrusive than having several separate sensors attached to different objects.
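As a rough illustration only, here is one way a reading from such a multi-channel sensor might be represented in software; the channel names and units below are assumptions based on the signals mentioned above, not the team’s actual data format.

```python
from dataclasses import dataclass
import time

@dataclass
class SensorReading:
    """One snapshot from a hypothetical multi-channel super sensor.

    The fields mirror the kinds of signals mentioned in the article
    (light, color, motion, sound, vibration); the real prototype exposes
    19 channels, and its actual data format is not described here.
    """
    timestamp: float
    light_intensity: float      # e.g. lux
    color_rgb: tuple            # (r, g, b)
    motion: float               # accelerometer magnitude
    sound_level: float          # e.g. microphone level in dBFS
    vibration: float            # high-frequency vibration energy

def sample_reading() -> SensorReading:
    """Placeholder that would be replaced by real driver calls."""
    return SensorReading(time.time(), 0.0, (0, 0, 0), 0.0, 0.0, 0.0)
```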
Instead of each device requiring built-in IoT technology, the all-purpose sensor can simply detect what a gadget is doing. To make this possible, the team gave the sensor an initial set of training data, specifically ‘real world examples of what 38 different devices and appliances sound like when in use’. From this, the sensor should, in theory, be able to identify what is happening inside the room based on what it already knows. For instance, it should be able to tell the difference between a running blender and a coffee grinder that has just been switched on. And over time, with the help of machine learning algorithms, it should be able to do this with increasing accuracy.
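To give a flavor of how this kind of recognition could work, here is a minimal sketch of training a classifier on featurized windows of sensor data using scikit-learn; the features, the random placeholder data and the choice of classifier are illustrative assumptions, not the team’s published pipeline.

```python
# Minimal sketch: classify which appliance is running from sensor windows.
# The featurization and labels here are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def featurize(window: np.ndarray) -> np.ndarray:
    """Summarize a window of raw samples (rows = samples, columns = channels)
    into a fixed-length feature vector."""
    return np.concatenate([
        window.mean(axis=0),   # average level per channel
        window.std(axis=0),    # variability per channel
        window.max(axis=0),    # peaks per channel
    ])

# Hypothetical training data: windows of 19-channel samples with labels
# such as "blender" or "coffee_grinder".
rng = np.random.default_rng(0)
train_windows = [rng.normal(size=(256, 19)) for _ in range(100)]
train_labels = rng.choice(["blender", "coffee_grinder"], size=100)

X = np.stack([featurize(w) for w in train_windows])
clf = RandomForestClassifier(n_estimators=100).fit(X, train_labels)

# At runtime, classify the most recent window of readings.
live_window = rng.normal(size=(256, 19))
print(clf.predict([featurize(live_window)])[0])
```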
As Professor Harrison described in an article published by the University, “It’s like a little AI. It only knows about the things we’ve trained it on. Every time we give it something new, it gives a best guess based on the data we’ve given it.”
And the results are promising: the team reports that initial tests showed close to 100% accuracy, which is why they have now moved on to the next stage of their research. This includes studying how sensors in different rooms can work in tandem to improve accuracy, as well as how a sensor can alert users that something in the room has changed. For instance, beyond recognizing that a paper towel is being dispensed (1st Order Synthetic Sensor), it should also be able to notify a user via text or an app that the paper towel dispenser is empty (2nd Order Synthetic Sensor).
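One way to picture the difference between the two orders is a small event pipeline: a first-order detector emits raw events (such as ‘towel dispensed’), and a second-order layer aggregates them into higher-level state and notifications. The event names, the capacity figure and the notify() stub below are assumptions made for the example, not the team’s API.

```python
# Illustrative sketch of layering a "second-order" inference on top of
# "first-order" event detections.

DISPENSER_CAPACITY = 200  # hypothetical number of towels per refill

def notify(message: str) -> None:
    """Stand-in for a real push notification, text message or app alert."""
    print(f"ALERT: {message}")

class PaperTowelMonitor:
    """Counts first-order 'towel_dispensed' events and raises a
    second-order 'dispenser empty' alert when the count hits capacity."""

    def __init__(self, capacity: int = DISPENSER_CAPACITY):
        self.capacity = capacity
        self.dispensed = 0

    def on_event(self, event: str) -> None:
        if event == "towel_dispensed":           # first-order detection
            self.dispensed += 1
            if self.dispensed >= self.capacity:  # second-order inference
                notify("Paper towel dispenser is empty, please refill.")
        elif event == "dispenser_refilled":
            self.dispensed = 0

# Example: feed a stream of detected events into the monitor.
monitor = PaperTowelMonitor(capacity=3)
for e in ["towel_dispensed", "towel_dispensed", "towel_dispensed"]:
    monitor.on_event(e)
```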
If they can figure those things out, the path forward for their synthetic sensors will be clear: deployment across a range of settings, including homes, schools, hospitals and other healthcare facilities.
The research was recently published online. You can also see how the system works through this YouTube video.