China’s Swarms of Smart Drones Have Enormous Military Potential

By | International Relations, Military, Weapons

I don’t care for swarms of insects or Hitchcock-inspired birds, let alone swarms of miniature drones. But alas….

China is pushing the envelope in its use of miniature drones and recently set a record “when it succeeded in mobilizing the largest swarm of drones in history. Over 1,000 miniature drones performed a variety of tasks to showcase the collective orchestration of the high-tech instruments.”

According to The Diplomat:

The future of drone swarms and their implications for the future of warfare are topics of much debate. The idea of using drones en masse to overwhelm a target, achieving a tactical advantage through numbers, is a popular notion. However, the orchestration of China’s new drones illustrates more than the employment of sheer numbers and drones operating in close proximity to one another. The performance put on near the end of 2017 demonstrates China’s potential skill in effective swarm systems. Flying 1,108 tiny dronebots as a single unit illustrated China’s acuity and interest in autonomous flight capabilities, not simply of drones but rather of smart drone instruments capable of much more.

Having shown its mastery of the key to successful drone swarming, China has moved beyond the initial steps in the process. Programmed units have also proven their capacity for independent thought. During its swarming demonstrations, the miniature drones, when falling out of sync with the group or failing to achieve their intended objectives, would execute their own landing. (emphasis added)
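
That last behavior, where a drone that detects it has fallen out of formation takes itself out of the air, is easy to picture in code. Here is a minimal sketch of the idea, purely my own illustration (every name and threshold below is hypothetical, not anything from China’s actual software):

```python
import math

# Illustrative failsafe for one drone in a swarm: if it drifts too far from
# its assigned slot in the formation, it lands itself rather than endanger
# the rest of the swarm. Hypothetical sketch only.

MAX_DRIFT_METERS = 2.0  # assumed tolerance before a drone gives up

def step(drone_position, assigned_slot):
    """Return the action a drone takes on each control tick."""
    drift = math.dist(drone_position, assigned_slot)  # Euclidean distance
    if drift > MAX_DRIFT_METERS:
        return "execute_autonomous_landing"   # out of sync: land safely
    return "hold_formation"                   # in sync: keep flying

# Example tick: a drone 3.5 m below its slot chooses to land itself.
print(step((10.0, 5.0, 3.5), (10.0, 5.0, 7.0)))  # -> execute_autonomous_landing
```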

Chinese military drones also have the ability to repair themselves, which is astounding.

I strongly recommend that you check out the rest of the article. It’s very good.

Ethical Technology Will Require A Grassroots Revolution

By | Ethics, Great AI Debate

A WIRED article today focuses on Tristan Harris, a former Design Ethicist at Google. The article offers a different perspective on ethics and technology than most I’ve read: there is no discussion of killer robots, but rather a focus on the relationship between technology and mankind.

According to Harris, technology such as the iPhone, which stimulates the mind via its apps, “has become an existential threat to human beings,” language that closely parallels that used by Elon Musk. Email alone is literally addictive, stimulating the release of dopamine with each notification of received mail. Those neurological rewards (dopamine) kill neurons when overstimulated by video games or time spent on Facebook, according to Robert Lustig, a pediatric endocrinologist at UC San Francisco (UCSF).

Harris is calling on the companies themselves to redesign their products with ethics, not purely profits, in mind, and calling on Congress to write basic consumer protections into law.

He states:

We live in an environment, this digital city, without even realizing it. That city is completely unregulated. It’s the Wild West. It’s like, build a casino wherever you want with flashing lights and flashing signs. Maximize developer access to do whatever they want to people. Shouldn’t there be some zoning laws?

It’s acutely apparent that those laws won’t just happen on their own. They require a groundswell of public pressure on both tech companies and politicians. If there were ever a time to apply such pressure, it’s this age of unprecedented activism. After all, if tech platforms are influencing the way people think about the world, the way they think about each other, and the way they think about themselves, then they’re also influencing the way we talk about women’s rights, the climate, and immigration (and how we vote, a timely example). (parenthetical added)

We see another human-tech relationship in the domain of AI. The ethical, legal, and regulatory dimensions of AI are, I believe, the most important we must confront if we are to unleash AI’s beneficent potential while protecting ourselves from possible outcomes such as The Singularity. Consider the notion of a tipping point at which machines outsmart humans (i.e., they pass the Turing Test, which at that moment would instantly become obsolete), and then think of machines (not just cars) having autonomy that humans cannot control. Already, rogue AI agents at both Google and Facebook wrote their own languages for machine-to-machine communication that some of the smartest people in Silicon Valley could not decipher. Each company pulled the plug on the rogues. What happens when rogue communication (we’ve just seen the tip of that iceberg) occurs between companies? Or between a company and a nation? That’s worth a double take.


Chinese Police Are Using Smart Glasses To Identify Potential Suspects

By | Biometrics, Ethics, Great AI Debate

In recent weeks, prominent news sources have reported on the Chinese government’s use of short- and long-range biometrics to identify people by matching their traits against a huge national database.

Now it gets even more interesting. According to TechCrunch:

China already operates the world’s largest surveillance state with some 170 million CCTV cameras at work, but its line of sight is about to get a new angle thanks to new smart eyewear that is being piloted by police officers.

The smart specs look a lot like Google Glass, but they are used for identifying potential suspects. The device connects to a feed which taps into China’s state database to root out potential criminals using facial recognition. Officers can identify suspects in a crowd by snapping their photo and matching it to the database. Beyond a name, officers are also supplied with the person’s address.
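
Mechanically, the matching step described above is a nearest-neighbor search over face embeddings: the snapped photo is converted to a numeric vector and compared against every enrolled vector in the database. A minimal sketch of that step, assuming the embeddings have already been computed (all names and thresholds here are hypothetical, my own illustration rather than the actual system):

```python
import numpy as np

# Hypothetical database: one embedding vector per enrolled identity.
database = {
    "person_a": np.array([0.11, 0.87, 0.45, 0.02]),
    "person_b": np.array([0.93, 0.10, 0.33, 0.58]),
}

MATCH_THRESHOLD = 0.9  # assumed cosine-similarity cutoff

def cosine_similarity(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def identify(query_embedding):
    """Return the best-matching identity, or None if nothing is close enough."""
    best_id, best_score = None, -1.0
    for identity, embedding in database.items():
        score = cosine_similarity(query_embedding, embedding)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id if best_score >= MATCH_THRESHOLD else None

# A query vector close to person_a's embedding returns that identity.
print(identify(np.array([0.12, 0.85, 0.44, 0.03])))  # -> person_a
```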

The scope of China’s project is sweeping. Consider this:

The glasses have been deployed in Zhengzhou, the capital of central province Henan, where they have been used to surveil those traveling by plane and train, according to the Wall Street Journal. With Chinese New Year, the world’s largest human migration, coming later this month, you’d imagine the glasses could be used to surveil the hundreds of millions of people who travel the country, and beyond, for the holiday period.

This is about as Orwellian as it gets, yet Orwell himself wouldn’t be the least bit surprised.

And take a moment to check out BBC News’ great video of a Chinese police command center that uses biometrics fed from officers on the ground.

Artificial Intelligence — the arms race we may not be able to control

By | International Relations, Law, Robotics

By Mike Rogers (Former Congressman, R-MI), The Hill

Vladimir Putin recently stated that “[w]hoever becomes the leader in [the AI] sphere will become ruler of the world.”

Not a market leader. Ruler of the world. Time to listen up.

Mike Rogers, former Congressman (R-MI), has something to say about this. Rogers served as chairman of the House Permanent Select Committee on Intelligence from 2011 to 2015.

For once, I find myself in agreement with the President of Russia, but just this once. Artificial Intelligence offers incredible promise and peril. Nowhere is this clearer than in the realm of national security. Today uncrewed systems are a fact of modern warfare. Nearly every country is adopting systems where personnel are far removed from the conflict and wage war by remote control. AI stands to sever that ground connection. Imagine a fully autonomous Predator or Reaper drone. Managed by an AI system, the drone could identify targets, determine their legitimacy, and conduct a strike all without human intervention.

Meanwhile, the United Kingdom has announced that it will not develop autonomous weapons (i.e., killer robots) without an absolute guarantee of human oversight, authority, and accountability. This is a wise standard, although I doubt it’s one that will prevail in the long term. Militaries and military-industrial complexes will exert too much pressure on governments. The call to remove human warfighters from theaters of engagement will be too strong in terms of both resources and its moral appeal (i.e., saving human lives). (It is worth noting that numerous military experts have expressed the fear that removing humans from war will desensitize us to it, to the point where it will seem almost “too easy” to engage our enemies without real consequence to ourselves.)

Rogers acknowledges the UK’s self-imposed restrictions but also describes their limitations (and thus, surely, the limitations of the United States’ manner of using autonomous or remote-controlled weapons). He writes: “There are examples of AI purposely and independently going beyond programmed parameters.” (emphasis added).

Rogue algorithms led to a flash crash of the British pound in 2016, in-game AIs created superweapons and hunted down human players, and AIs have created their own languages that were indecipherable to humans. AIs proved more effective than their human counterparts in producing spear-phishing content and catching users with it. Not only did the AIs create more content, they successfully captured more users with their deception. While seemingly simple and low-stakes in nature, extrapolate these scenarios into more significant and risky areas and the consequences become much greater.

Cybersecurity is no different. Today we are focused on the hackers, trolls, and cyber criminals (officially sanctioned and otherwise) who seek to penetrate our networks, steal our intellectual property, and leave behind malicious code for activation in the event of a conflict. Replace the individual with an AI and imagine how fast hacking takes place: networks against networks, at machine speed, all without a human in the loop.

Sound far-fetched? It’s not. In 2016, the Defense Advanced Research Projects Agency held an AI-versus-AI capture-the-flag contest, the Cyber Grand Challenge, at the DEF CON event. AI networks against AI networks. That’s not a game of chess or Go.

Rogers raises the stakes:

This is not a new type of bullet or missile. This is a potentially fully autonomous system that even with human oversight and guidance will make its own decisions on the battlefield and in cyberspace.

How can we ensure that the system does not escape our control? How can we prevent such systems from falling into the hands of terrorists or insurgents? Who controls the source code? Can we build in so-called impenetrable kill switches, and how?

And what does Rogers make of Putin’s gauntlet?

AI and AI-like systems are slowly being introduced into our arsenal. Our adversaries, China, Russia, and others, are introducing AI systems into their arsenals as well. Implementation is happening faster than our ability to fully comprehend the consequences.

Putin’s new call spells out a new arms race. Rushing to AI weapon systems without guiding principles is dangerous. It risks an escalation that we do not fully understand and may not be able to control. (emphasis added).

Is AI Riding a One-Trick Pony?

By | Deep Learning, Geoffrey Hinton, Prominent People in AI

This is a great piece by James Somers in MIT Technology Review about the germination, 30 years ago, of an idea (backpropagation) that, in the words of Princeton computational psychologist Jon Cohen, is “what all of deep learning is based on — literally everything.” The idea of so-called “backprop” was set forth in a 1986 paper by Geoffrey Hinton and his co-authors. Hinton, a lead scientist on the Google Brain AI team, is considered the father of deep learning. Read More
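
For readers who want to see the idea rather than just read about it, the core of backprop fits in a few lines: run the network forward, then push the error gradient backward through each layer via the chain rule and nudge the weights downhill. A toy sketch in NumPy (my own illustration, not code from the article), training a two-layer network on XOR:

```python
import numpy as np

# A tiny two-layer network trained on XOR, to illustrate backpropagation.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: compute activations layer by layer.
    h = sigmoid(X @ W1 + b1)      # hidden activations, shape (4, 8)
    out = sigmoid(h @ W2 + b2)    # predictions, shape (4, 1)

    # Backward pass: propagate the error gradient from output to input,
    # reusing each layer's local derivative (the chain rule in action).
    d_out = (out - y) * out * (1 - out)   # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient pushed back to hidden layer

    # Gradient-descent update.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(3))  # approaches [[0], [1], [1], [0]]
```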

AI Pioneer Andrew Ng: There’s Room for Multiple Winners in the AI Race

By | Andrew Ng, Corporations, Prominent People in AI
Coursera co-founder Andrew Ng believes AI has the potential to be the new electricity.

Annie Palmer at TheStreet deems the following companies the “frightful five” in the AI space: Alphabet (Google), Facebook, Amazon, Microsoft, and Apple. Tesla is also in the fray with its autonomous vehicles, yet CEO Elon Musk “has been one of AI’s staunchest critics, calling it an existential threat to the human race.” On the flip side of the coin, Coursera co-founder Andrew Ng, who previously founded Google’s AI-powered Brain project, has a more positive view.

AI will transform every system and aspect of humans’ lives, ranging from transportation and communication to agriculture, education and healthcare, among many others. “I think today we see a surprisingly clear path for AI tools … they’ll transform pretty much every single industry,” Ng explained. “I actually find it difficult to name an industry that I don’t think AI will transform.” 

Read More

Commentary: What To Do Against the ‘Nightmare Scenario’?

By | Great AI Debate, International Relations, Singularity, Weapons

Stephen R. Ruth, Professor of Public Policy, Schar School of Policy and Government, George Mason University

We should fear Artificial Intelligence. Not in the future, but now. Ask Sheryl Sandberg, chief operating officer of Facebook. She announced that her company, with its over 2 billion users, built software it cannot fully control. “We never intended or anticipated this functionality being used this way,” Sandberg said, “— and that is on us.” Facebook’s systems had allowed Russian operatives to create accounts and ads aimed at influencing the 2016 U.S. presidential election. The gigantic network seems to have created systems that are ungovernable.

Facebook’s problem hints at the extreme dangers lurking within Artificial Intelligence as it grows throughout the world. AI experts are already talking about a “nightmare scenario,” in which nations’ AI systems could ignite real-time conflicts. Consider: hair-trigger AI systems could eventually control several nations’ military responses, and an error in any one algorithm could possibly lead to a nuclear catastrophe.

Between the Facebook case and the nightmare scenario is the immediate problem of millions of people losing jobs. Around the globe, programmable machines, including autonomous cars and factory robots, are replacing humans in the workplace. Automation threatens 80 percent of today’s 3.7 million transportation jobs, one U.S. government report estimated, including truck and school bus drivers, taxi drivers, and Uber and Lyft drivers. Another report indicates AI is threatening aspects of many different jobs, including call center operators, surgeons, farmers, security guards, retail assistants, fast food workers, and journalists. A 2015 study of robots in 17 countries found that they accounted for over 10 percent of those countries’ gross domestic product growth between 1993 and 2007. Consider Foxconn Technology Group, a major Chinese supplier for Apple and Samsung cell phones and computers, which is planning to automate 60,000 factory jobs with robots, replacing its existing employees. Meanwhile, Ford’s factory in Cologne, Germany, has not only replaced human workers with robots but also, at some job stations, positioned robots beside human workers; these are called cobots.

But these employment issues, as troubling as they are, cannot compare to the dangers envisioned by Elon Musk and Stephen Hawking. They are among the dozens of thought leaders who signed a letter harshly condemning governments’ increasing reliance on AI for military use. Their chief concern is autonomous weapons, another application of AI. The U.S. military is already developing armaments that do not require humans to operate them; these weapons are being created to offer battlefield support for human troops. Autonomous arms are dramatically easier to develop and mass-produce than nuclear weapons. They are likely to appear soon on black markets around the world, certain to be favored by terrorist groups. To quote from the open letter, the new autonomous weapons would be ideal for dark actions including “assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.”

There are some economic optimists, like MIT’s Erik Brynjolfsson and Andrew McAfee, who feel that AI will eventually bring long-term prosperity to the world, but even they admit that finding common ground among economists, technologists, and politicians is daunting. Obviously, it will be very difficult to craft legislation about AI without more agreement about its potential effects.

We should definitely be fearful of artificial intelligence, not just because it is clearly destined to affect the number of available jobs, including those in middle and even upper middle class domains, but because its potential military use can lead to a perilous future, if not controlled. As the open letter signed by Musk and Hawking concluded, “Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”

The author is director of the International Center for Applied Studies in Information Technology (ICASIT)

This commentary is re-published here with the author’s permission. 

Whom Does Elon Musk Fear? An Interview with Walt Mossberg & Kara Swisher

By | Corporations, Elon Musk, Great AI Debate, Prominent People in AI, Videos

In an interview with Recode’s Walt Mossberg and Kara Swisher, Musk said he feared the power of one company in the AI sphere. Although he was diplomatically silent on which company, it would be surprising were the answer not Google or Facebook. Have a look at this seven-minute snippet of a much longer conversation.

What Did Elon Musk Accomplish in 2017?

By | Elon Musk, Great AI Debate, International Relations, Law, OpenAI, Prominent People in AI, Robotics, Weapons

Elon Musk has had a busy 2017, yet he still has more than 10% of the year at his fingertips. Futurism has put together a collection of its articles to dive into Musk’s accomplishments thus far this year. Each is singularly amazing. Together, they’re mind-boggling. 

The articles look at the following:

  • Space X unveils the space suits that will be worn by future Mars travelers
  • OpenAI creates an artificial intelligence that is able to teach itself
  • The Boring Company tests a working model and transports cars underground
  • The Gigafactory 1 produces more clean energy batteries than any other factory
  • New Tesla solar roofs will be cheaper than regular roofs and have an infinity warranty
  • Musk finally publishes his plan to colonize Mars and make humans multi-planetary
  • SpaceX upgrades its tech, making some of it capable of indefinite launches
  • The Boring Company gets approval to build a 29-minute Hyperloop from NY to DC
  • Musk creates Neuralink to unite the human brain with artificial intelligence
  • Tesla begins working on electric 18-wheelers and electric pickup trucks
  • Musk unites experts, urging the UN to take action against autonomous weapons
  • NASA says we need companies like SpaceX to secure the future of space exploration
  • New robotic software could make Tesla worth as much as Apple
  • OpenAI establishes a school for artificial intelligences


Achieving Self-Driving Nirvana: How I Learned to Love (Self-)Driving My Tesla Model S

By | Autonomous Driving

I am the proud owner of a bright red Tesla Model S. I took delivery of “Reddy Kilowatt” in December 2016. (When I was growing up, Reddy Kilowatt was the mascot of the Cleveland Electric Illuminating Company. Reddy even has his own website.) I have put 15,000 miles on it, having traveled from my home in Northern Virginia to Appomattox; Cleveland; Cincinnati; Columbus; Toronto; Williamstown, Massachusetts; and Portsmouth, New Hampshire.

Elon Musk claimed that by the end of 2017, a Tesla will be able to drive autonomously from Los Angeles to New York City without driver intervention. In fact, he said this could be done between any two cities in the United States.

I tend to doubt that this will occur in 2017 because of the production glitches on the new Tesla Model 3. Tesla will be focusing its resources on getting out of Elon’s aptly named “production hell”.

Nevertheless, I would like to share my experiences using the self-driving features of my vehicle and to delve into our self-driving future. I would also like to begin to explore some of the technical, policy, legal, and ethical issues that need to be addressed to make that future a reality.

Read More