Pentagon Wants Silicon Valley’s Help on A.I.
But those relations have soured in recent years — at least with the rank and file of some better-known companies. In 2013, documents leaked by the former intelligence contractor Edward J. Snowden revealed the breadth of spying on Americans by intelligence services, including monitoring the users of several large internet companies.


Robert O. Work, right, at a 2014 news conference led by Chuck Hagel, the defense secretary at the time. Mr. Work, who was the deputy secretary of defense, said of the global race for A.I. technology: “This is a Sputnik moment.” Credit Chip Somodevilla/Getty Images

Two years ago, that antagonism grew worse after the F.B.I. demanded that Apple create special software to help it gain access to a locked iPhone that had belonged to a gunman involved in a mass shooting in San Bernardino, Calif.

“In the wake of Edward Snowden, there has been a lot of concern over what it would mean for Silicon Valley companies to work with the national security community,” said Gregory Allen, an adjunct fellow with the Center for a New American Security. “These companies are — understandably — very cautious about these relationships.”

The Pentagon needs help on A.I. from Silicon Valley because that’s where the talent is. The tech industry’s biggest companies have been hoarding A.I. expertise, sometimes offering multimillion-dollar pay packages that the government could never hope to match.

Mr. Work was the driving force behind the creation of Project Maven, the Defense Department’s sweeping effort to embrace artificial intelligence. His new task force will include Terah Lyons, the executive director of the Partnership on AI, an industry group that includes many of Silicon Valley’s biggest companies.

Mr. Work will lead the 18-member task force with Andrew Moore, the dean of computer science at Carnegie Mellon University. Mr. Moore has warned that too much of the country’s computer science talent is going to work at America’s largest internet companies.

With tech companies gobbling up all that talent, who will train the next generation of A.I. experts? Who will lead government efforts?

“Even if the U.S. does have the best A.I. companies, it is not clear they are going to be involved in national security in a substantive way,” Mr. Allen said.


An Air Force team transporting missiles to be loaded onto drones at a Persian Gulf base in 2016. Credit John Moore/Getty Images

Google illustrates the challenges that big internet companies face in working more closely with the Pentagon. Google’s former executive chairman, Eric Schmidt, who is still a member of the board of directors of its parent company, Alphabet, also leads the Defense Innovation Board, a federal advisory committee that recommends closer collaboration with industry on A.I. technologies.

Last week, two news outlets revealed that the Defense Department had been working with Google to develop A.I. technology that can analyze aerial footage captured by drones. The effort was part of Project Maven, led by Mr. Work. Some employees were angered that the company was contributing to military work.

Google runs two of the best A.I. research labs in the world — Google Brain in California and DeepMind in London.

Top researchers inside both Google A.I. labs have expressed concern over the use of A.I. by the military. When Google acquired DeepMind, the company agreed to set up an internal board that would help ensure that the lab’s technology was used in an ethical way. And one of the lab’s founders, Demis Hassabis, has explicitly said its A.I. would not be used for military purposes.

Google acknowledged in a statement that the military use of A.I. “raises valid concerns” and said it was working on policies around the use of its so-called machine learning technologies.

Among A.I. researchers and other technologists, there is widespread fear that today’s machine learning techniques could put too much power in dangerous hands. A recent report from prominent labs and think tanks in both the United States and Britain detailed the risks, including issues with weapons and surveillance equipment.

Google said it was working with the Defense Department to build technology for “non-offensive uses only.” And Mr. Work said the government explored many technologies that did not involve “lethal force.” But it is unclear where Google and other top internet companies will draw the line.

“This is a conversation we have to have,” Mr. Work said.
