Apple hit headlines in early December when it announced that it would begin publishing AI papers, a significant break from its standard position of extreme secrecy. The tech behemoth has now followed through on that promise, publishing a paper titled ‘Learning from Simulated and Unsupervised Images through Adversarial Training.’
The paper’s lead author is Apple researcher Ashish Shrivastava, who wrote it alongside Oncel Tuzel, Wenda Wang, Tomas Pfister, Josh Susskind, and Russ Webb. The work focuses on computer vision, describing a technique for improving an algorithm’s ability to recognize objects by training it on synthetic images.
Machine learning researchers can train artificial intelligence on either computer-generated or real-world images. The former are considered more efficient since they come pre-labeled and annotated. The latter, on the other hand, require a human touch: people have to tag everything the computer sees in each photo.
The main conundrum in using artificial images is that a neural network trained on them may fail to transfer its knowledge to real photos, since synthetic imagery is often not realistic enough. Apple’s solution to this problem is called Simulated + Unsupervised (S+U) learning. The technique essentially boosts the realism of generated images while keeping their original annotations intact.
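The idea can be illustrated with the kind of loss a “refiner” network might minimize: one term rewards fooling a discriminator into judging the refined image as real, and a second self-regularization term keeps the refined image close to the simulator output so its labels remain valid. The sketch below is a minimal NumPy illustration of that combined objective, not Apple’s actual implementation; the function name, the `lam` weight, and the input shapes are all assumptions.

```python
import numpy as np

def refiner_loss(refined, synthetic, d_prob_real, lam=0.01):
    """Illustrative S+U-style refiner loss (a sketch, not Apple's code).

    refined      -- refiner output for a batch of synthetic images
    synthetic    -- the original simulator images, same shape as `refined`
    d_prob_real  -- discriminator's probability, per image, that the
                    refined image is real (values in (0, 1))
    lam          -- assumed weight for the self-regularization term
    """
    # Adversarial term: low when the discriminator believes the
    # refined images are real.
    adv = -np.mean(np.log(d_prob_real + 1e-8))
    # Self-regularization term: penalize drifting away from the
    # simulator image, so the synthetic labels stay usable.
    reg = np.mean(np.abs(refined - synthetic))
    return adv + lam * reg
```

A refiner that leaves the image untouched pays no regularization cost but may fail the adversarial term; one that changes the image too much pays the regularization penalty, which is the tension the weight `lam` balances.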
Apple’s method is in fact a modified offshoot of a recent machine learning technique called Generative Adversarial Networks (GANs), in which two competing neural networks are pitted against each other in order to output photorealistic images. The brand apparently hopes to push its S+U learning method beyond still images to moving video in the future.
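The adversarial setup underlying GANs can be sketched in a few lines: a generator produces samples, a discriminator tries to tell them from real data, and the two are trained in alternation. The toy below uses a 1-D linear generator learning to mimic Gaussian data, with hand-derived gradients; it is purely illustrative, and every architectural choice here is an assumption rather than anything from Apple’s paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_toy_gan(steps=1000, batch=32, lr=0.02, seed=0):
    """Toy 1-D GAN: generator g(z) = w*z + b learns to mimic N(3, 1).

    Discriminator d(x) = sigmoid(a*x + c). A sketch of the adversarial
    training loop, not Apple's model.
    """
    rng = np.random.default_rng(seed)
    w, b = 1.0, 0.0   # generator parameters
    a, c = 0.1, 0.0   # discriminator parameters
    for _ in range(steps):
        real = rng.normal(3.0, 1.0, batch)
        z = rng.normal(0.0, 1.0, batch)
        fake = w * z + b
        # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
        d_real = sigmoid(a * real + c)
        d_fake = sigmoid(a * fake + c)
        a -= lr * np.mean(-(1 - d_real) * real + d_fake * fake)
        c -= lr * np.mean(-(1 - d_real) + d_fake)
        # Generator step: update g to make d label its samples as real
        # (non-saturating loss -log d(g(z))).
        d_fake = sigmoid(a * fake + c)
        dx = -(1 - d_fake) * a
        w -= lr * np.mean(dx * z)
        b -= lr * np.mean(dx)
    return w, b, a, c
```

The two updates pull in opposite directions: the discriminator step sharpens the real/fake boundary, while the generator step moves its samples toward the region the discriminator currently calls real. That tug-of-war is what eventually yields realistic outputs.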
Apple’s switch to openness in AI is a big step forward for the firm and may well help its recruiting efforts. Its earlier secrecy was widely criticized by the research community and discouraged researchers from joining the company, since they wouldn’t be able to publish their work or engage with their peers.