Just when you thought you understood cloud computing and knew that human ‘gesture recognition’ was the natural next evolution after ‘touch’, along comes contextual computing. This approach, also known as context-aware computing, has actually been around for a while. It combines software application development, hardware, networks and service-based functions, all of which come together to give devices a level of situational intelligence based upon their surroundings at a particular time and place. In other words, it’s like computers getting human senses.
We used to think that ‘situational awareness’ was something that only humans, animals and other living creatures had the ability to work out — this is no longer true.
The human big data computer
If we are cornered down a dark alley or we find ourselves in the wrong bar at the wrong time of night, our senses start to compute an analysis of the situation and our brain starts to make decisions based upon our perceived environment. If we find ourselves in a corporate environment with a strict dress policy or behavioral code, then, equally, our brain starts to make decisions based upon our perceived environment.
We humans are good at working out what to do based upon where we are, what time of day it is, what other people are doing around us and what the most likely outcomes are for the situation that presents itself to us — computers, it turns out, are also quite good at this.
Humans are extremely good at federating big data from a multiplicity of sources and classifying information into blocks upon which we can make contextually aware decisions. Computers obviously have greater analytical powers, but find it harder to federate and classify data until they have learned how to. As we know, machines only know what we tell them to know, initially.
But things are changing: devices are getting very good at knowing where they are (GPS and location services help here). They also know what the weather is like (from the Internet) and they can even start to integrate information that comes from cameras and sensors. Where we go next is the level where software application developers can start to enrich our apps with more context when we (from a privacy perspective) are happy to allow it.
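To make this concrete, here is a minimal sketch of how an app might combine those signals into one situational decision. Everything here is hypothetical: the field names, the sensor values and the `suggest_mode` rules are invented for illustration, and real inputs would come from location services, a weather API and on-device sensors rather than hard-coded values.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ContextSnapshot:
    """One moment of device context (all fields illustrative)."""
    latitude: float
    longitude: float
    local_time: datetime
    weather: str          # e.g. "rain", "clear"
    ambient_light: float  # lux reading from a light sensor

def suggest_mode(ctx: ContextSnapshot) -> str:
    """Derive a simple situational decision from the combined signals."""
    if ctx.weather == "rain":
        return "indoor"
    if ctx.local_time.hour >= 22 or ctx.ambient_light < 10.0:
        return "night"
    return "default"

# A late-evening, low-light snapshot yields the "night" mode.
snapshot = ContextSnapshot(43.65, -79.38, datetime(2015, 6, 1, 23, 0), "clear", 5.0)
print(suggest_mode(snapshot))  # prints "night"
```

The point is not the rules themselves but the shape of the problem: several independent sensed values are fused into a single judgment about the user's situation.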
The next something-as-a-Service
Flybits and its Context-as-a-Service cloud-based solution is playing precisely in this market. A developer working with these tools can enable an iOS or Android app with Flybits in 15 minutes and begin adding contextual elements to deliver a more customized experience.
Flybits lets developers treat as contextual data anything that can be sensed, imported, or integrated, and provides a unified way to manage and use all of it. Built-in context sources include device settings, user behavior and preferences, weather and other environmental data, information from cameras and other industrial sensors, social network connections, and health data from wearables.
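The "unified way to manage" many heterogeneous sources can be pictured as a common interface that every source implements. To be clear, the class and method names below are invented for explanation and are not the actual Flybits SDK API; the stubbed return values stand in for live device and network reads.

```python
from abc import ABC, abstractmethod

class ContextSource(ABC):
    """Common interface every context source implements (illustrative)."""
    @abstractmethod
    def read(self) -> dict:
        """Return the latest values this source can sense."""

class DeviceSettingsSource(ContextSource):
    def read(self) -> dict:
        return {"locale": "en_CA", "battery": 0.82}  # stubbed device values

class WeatherSource(ContextSource):
    def read(self) -> dict:
        return {"conditions": "clear", "temp_c": 21.0}  # stubbed network values

def unified_context(sources: list) -> dict:
    """Merge every registered source into one context dictionary."""
    merged: dict = {}
    for source in sources:
        merged.update(source.read())
    return merged

ctx = unified_context([DeviceSettingsSource(), WeatherSource()])
```

Because every source answers the same `read()` call, adding a new kind of context (a beacon, a wearable, a social feed) means writing one new class rather than changing the app's core logic.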
Location is usually one important element of context, so Flybits includes a full set of location-management and push notification services that use GPS, WiFi, cell towers, NFC and beacons. Flybits is hardware-agnostic to sensors such as beacons and is designed to work with any existing infrastructure.
“We’re very excited to see how Flybits will change the way people develop for and interact with the mobile Internet,” said Flybits founder & chief product officer Dr. Hossein Rahnama. “Our product will accelerate companies’ understanding and adoption of context-aware mobile experiences, making them more widely available to consumers. Flybits lets companies focus on introducing innovative experiences and new business models, whether delivered through apps on smartphones and tablets or through wearables or browsers, rather than worry about the technical complexities of context-aware services. Tailored, customized, and predictive experiences are becoming an expectation of consumers across the spectrum and we look forward to helping companies stay ahead of that demand.”
Users can create their own Context Plug-Ins and Moments and both the company and the Flybits developer community will add more. The SDK enables integration with in-house systems that bring additional contextual information.
What do our computers need for ‘contextual’ Artificial Intelligence?
To truly see this area of computing flourish, we need to give our machines a more complete set of data graphs that describe and detail how we act. These are sometimes broken down into social, interest, behavioral and personal graphs. If we understand the ‘social graph’ (in the context of a social network like Facebook) to be a mathematical description of a user and the things, people and places they interact with, then the interest, behavioral and personal graphs will start to enable us to become digitized humans to a greater degree.
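One way to picture those four graphs is as sets of edges from a user to the entities they relate to. This is a hedged sketch under that assumption: the graph names follow the article, but the relations and the `profile` helper are illustrative, not any particular platform's data model.

```python
# Each graph is a set of (subject, relation, object) edges (illustrative data).
user_graphs = {
    "social":     {("alice", "knows", "bob")},
    "interest":   {("alice", "likes", "cycling")},
    "behavioral": {("alice", "visits", "gym")},
    "personal":   {("alice", "lives_in", "toronto")},
}

def profile(user: str) -> dict:
    """Collect what the combined graphs say about one user."""
    return {
        graph: {(rel, obj) for subj, rel, obj in edges if subj == user}
        for graph, edges in user_graphs.items()
    }

alice = profile("alice")
```

A context-aware system would query such a profile alongside live sensor data: the graphs say who the user is, the sensors say where and when.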
Yes, there are big questions relating to privacy here (which is another story, so let’s leave it) but, largely speaking, this technology is inevitable.
This article was written by Adrian Bridgwater from Forbes and was legally licensed through the NewsCred publisher network.