March 15, 2019 weblog
Exploring photos by touch and native iPad support added to Seeing AI
The Microsoft Accessibility Blog is dedicated to those who can benefit from, or are simply interested in, technologies to see and hear. This month it has news updating its audience on what is going on with "Seeing AI."
Seeing AI is a free app that narrates the world for people with vision difficulties; the research project harnesses the power of AI to describe people, text and objects.
As its title suggests, the intended audience is people who are blind or have low vision. Microsoft's push is to use smartphone cameras to give this audience an easier way to understand the world around them. Yes, go ahead. Call it the Talking Camera.
Can this work and make a difference? The blog said that people are already using the app to independently accomplish daily tasks, and have been since its launch in 2017.
Ben Lovejoy walked 9to5Mac readers through its growth since its iPhone-only launch, with updates that added color recognition, identification of banknotes in four currencies, a light detector, and handwriting recognition.
How does the app make a difference? It helps users, for example, to read text such as restaurant menus, street signs and handwritten notes, and to identify products via barcode.
Clearly, positive reviews recognize Seeing AI as a bundle of supportive parts, including product identification and object recognition.
The March 12 blog entry announced more AI features. Shaikh listed the three new ones: native iPad support, using touch to explore photos, and channel improvements for easier access.
That touch-exploration feature is interesting: you tap an image on the touch screen to hear a description of the objects in it and the spatial relationships between them. The App Store preview said to select "Explore Photo" from the Scene channel, the photo browser, or when recognizing photos from other apps, then move your finger over the screen to hear where objects are located.
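The idea behind the feature can be sketched in a few lines: hit-test the touch point against the bounding boxes of detected objects, then announce the object and its rough position in the frame. Everything below is an illustrative assumption, not Seeing AI's actual code or API; the object labels, boxes, and helper names are invented for the example.

```python
# Hypothetical sketch of "explore photos by touch": map a touch point to
# detected objects and say where each object sits in the photo.
# Labels, bounding boxes, and function names are illustrative only.

def region(x, y, width, height):
    """Name the third of the image a point falls in, e.g. 'bottom left'."""
    col = ["left", "center", "right"][min(int(3 * x / width), 2)]
    row = ["top", "middle", "bottom"][min(int(3 * y / height), 2)]
    return f"{row} {col}"

def object_at(touch, detections):
    """Return the first detected object whose bounding box contains the touch point."""
    x, y = touch
    for label, (x0, y0, x1, y1) in detections:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return label
    return None

def describe(touch, detections, width, height):
    """Compose the phrase a screen reader might speak for this touch."""
    label = object_at(touch, detections)
    if label is None:
        return "nothing here"
    x, y = touch
    return f"{label}, {region(x, y, width, height)}"

# Example: a 300x300 photo with two (made-up) detections.
detections = [("dog", (10, 150, 120, 290)), ("chair", (180, 40, 290, 200))]
print(describe((60, 220), detections, 300, 300))   # finger on the dog
print(describe((150, 150), detections, 300, 300))  # finger on empty background
```

In a real app the detections would come from an on-device vision model and the phrase would be handed to the platform's screen reader as the finger moves.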
And what do they mean by "channels"?
Channels are the app's navigational modes; each corresponds to a specific type of recognition task or utility, and audio cues indicate when an image is being processed. Windows Central said that iPad support ensures the app is properly adjusted for larger displays, and the App Store page said users can get faster access to favorite features by customizing the order in which channels are shown.
You can download the app for free from the App Store.
Shudeep Chandrasekhar in 1redDrop reflected in 2017 on the launch of the accessibility application. Seeing AI is a response to the company's wish to move forward with "accessibility by design." At a demo at the Microsoft Future of Artificial Intelligence event in Sydney, he wrote, Kenny Johar Singh, a cloud solutions architect at Microsoft who had lost 75 percent of his vision to a retinal condition, used the app to accurately scan a product label.
Overall, the user gains more independence, relying less on others to read product labels, documents and other content. The app also works with other visual content, such as faces, describing them as accurately as possible for the user.
For the present, the big-picture view on Seeing AI comes from Parmy Olson of Forbes: "Though Seeing AI has a small audience of users, its evolution points to how the rest of us might use AI-powered technology like vision recognition in the future. Already much of the tech that people use today, from the computer mouse to text-to-speech software to predictive text, even the typewriter, has its roots in disability research."
blogs.msdn.microsoft.com/acces … 2019/03/12/seeingai/
© 2019 Science X Network