After announcing its intention to publish artificial intelligence research at the start of the month, the usually secretive Apple has now followed through. Its first public paper, submitted late last week, describes an algorithm that learns to recognise images generated by a computer, as opposed to real ones captured by a camera.
The paper, Apple's first to be made publicly available, is titled ‘Learning from Simulated and Unsupervised Images through Adversarial Training’ and credits a team of six researchers – Ashish Shrivastava, Tomas Pfister, Oncel Tuzel, Josh Susskind, Wenda Wang, and Russ Webb – in addition to the company, Apple Inc., itself.
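The adversarial setup the paper's title alludes to can be illustrated with a toy discriminator: a classifier trained to tell "real" samples from "simulated" ones. The sketch below is a hedged illustration only, not the paper's actual method – the data, the logistic-regression model, and the hyperparameters are all invented for the example:

```python
# Toy discriminator: learn to separate "simulated" from "real" samples,
# the core idea behind adversarial training. Everything here is an
# illustrative stand-in, not taken from the Apple paper.
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for image features: "real" and "simulated" data drawn
# from two different 2-D Gaussians.
real = rng.normal(loc=1.0, scale=0.5, size=(500, 2))
simulated = rng.normal(loc=-1.0, scale=0.5, size=(500, 2))

X = np.vstack([real, simulated])
y = np.concatenate([np.ones(500), np.zeros(500)])  # 1 = real, 0 = simulated

# Logistic-regression discriminator trained with plain gradient descent.
w = np.zeros(2)
b = 0.0
lr = 0.1
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability of "real"
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

accuracy = np.mean((p > 0.5) == y)
print(f"discriminator accuracy: {accuracy:.2f}")
```

In a full adversarial scheme, a second model (a generator or refiner) would be trained in opposition to such a discriminator, improving its output until the discriminator can no longer tell the two sources apart.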
Incidentally, the paper had been in submission since November 15, suggesting that Apple was considering relaxing its restrictive publication policy well before the formal announcement in December at the Neural Information Processing Systems (NIPS) conference in Barcelona.
This is a significant step because Apple has, for years, prevented its staff from openly publishing their research for the larger community. That secrecy has in turn hindered its efforts to hire the best people in the field, who expect to interact regularly with their peers.
Allowing its researchers to publish openly and contribute to the wider academic community should earn Apple better standing with the AI community, and help it lure stronger researchers in the process. Until now, Apple's main route to AI talent had been to acquire other companies outright.
But with machine learning becoming increasingly important – the Google Assistant is a well-known example of its capabilities – Apple is starting to embrace one facet of the open world.