Aware of the importance of image labels (alternative text) for people who are blind or have low vision, Microsoft is attempting to create auto-generated alt text for web images.
Currently, screen readers cannot describe an image that has no label or description; they simply announce “unlabeled graphic.” Microsoft is aiming to improve this experience. As program manager Travis Leithead explains, Microsoft is working on machine learning algorithms that will let the screen reader automatically process unlabeled images, describing each image in words and reading out any text that appears within it.
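To make the problem concrete: an image is “unlabeled” when its `<img>` tag carries no `alt` attribute at all, which is what forces a screen reader to fall back on announcing “unlabeled graphic.” The sketch below (class and variable names are illustrative, not from Microsoft) uses Python’s standard-library HTML parser to flag such images:

```python
from html.parser import HTMLParser

class UnlabeledImageFinder(HTMLParser):
    """Collects <img> tags that have no alt attribute at all --
    the images a screen reader can only announce as 'unlabeled graphic'."""

    def __init__(self):
        super().__init__()
        self.unlabeled = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if "alt" not in attr_map:  # missing entirely, not just empty
                self.unlabeled.append(attr_map.get("src", "(no src)"))

html = """
<img src="chart.png" alt="Quarterly sales chart">
<img src="photo.jpg">
<img src="icon.svg" alt="">
"""

finder = UnlabeledImageFinder()
finder.feed(html)
print(finder.unlabeled)  # ['photo.jpg'] -- alt="" marks a decorative image, which screen readers skip silently
```

Note the distinction the check preserves: an empty `alt=""` is a deliberate signal that the image is decorative and should be skipped, whereas a missing `alt` gives the screen reader nothing to go on.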
Microsoft acknowledges that the algorithms are not yet perfect and that the quality of the descriptions varies from image to image. While it is a work in progress, Leithead noted that “having some description for an image is often better than no context at all.”
Although the feature is not yet finalized, it can be tried today by enabling “Get image descriptions from Microsoft for screen readers” in edge://settings/accessibility and then browsing the web with Narrator or another screen reader. Microsoft is already working on improvements, and the team is optimistic that continued testing and refinement of the algorithm will make the service better over time.