Artificial Intelligence Bulletin – The Rise of the Cyborgs
Each Wednesday, futurist Nikolas Badminton highlights the top stories from the past week on the incredible rise of artificial intelligence and its application in society, the workplace, our cities, and our lives.
In Artificial Intelligence Bulletin – Rise of the Cyborgs we see Elon Musk talking transhumanism, finance departments being disrupted, seismic predictions, the future according to Hackaday, and Nvidia’s incredible growth.
Elon Musk: Humans must become cyborgs to avoid AI domination
Tesla and SpaceX CEO Elon Musk says humans will have to merge with machines to avoid becoming irrelevant.
Speaking at the World Government Summit in Dubai, he explained that the human brain isn’t capable of keeping up with computers, which will start replacing people in certain fields of work.
“Over time I think we will probably see a closer merger of biological intelligence and digital intelligence,” said Musk, according to CNBC.
“It’s mostly about the bandwidth, the speed of the connection between your brain and the digital version of yourself – particularly output.”
He added that computers can communicate at “a trillion bits per second”, while humans can only manage around 10 bits per second while typing on a mobile device.
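Taking Musk’s figures at face value, the gap he describes is easy to quantify. A quick back-of-the-envelope sketch (the numbers are Musk’s round figures, not precise measurements):

```python
# Rough comparison of the bandwidth figures Musk cites.
computer_bps = 1e12   # "a trillion bits per second" between machines
human_bps = 10        # ~10 bits per second typing on a mobile device

ratio = computer_bps / human_bps
print(f"Machines communicate roughly {ratio:.0e}x faster than a typing human")
```

That is a gap of about eleven orders of magnitude, which is the “bandwidth” problem Musk argues a brain–machine interface would have to close.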
He also spoke of his fear of “deep AI”, artificial intelligence that’s “smarter than the smartest human on earth”, labelling it a “dangerous” situation.
Read more at The Independent
Artificial Intelligence Is Becoming A Major Disruptive Force In Banks’ Finance Departments
A combination of elements including massive distributed computing power, the decreasing cost of data storage, and the rise of open source frameworks is helping to accelerate the application of artificial intelligence (AI). Our own research indicates that, by 2035, AI could double economic growth rates in 20 countries, and boost labor productivity by up to 40 percent. The increasing importance of AI has significant implications for financial institutions and particularly for those institutions’ own finance function. In short, AI has the potential to fundamentally transform banks’ finance function within the next decade – if not sooner.
Artificial intelligence is not one technology but rather a group of related technologies – including natural language processing (improving interactions between computers and human or “natural” languages); machine learning (computer programs that can “learn” when exposed to new data) and expert systems (software programmed to provide advice) – that help machines sense, comprehend and act in ways similar to the human brain. These technologies are behind innovations such as virtual agents (computer-generated, animated characters serving as online customer service representatives); identity analytics (solutions combining big data and advanced analytics to help manage user access and certification) and recommendation systems (algorithms helping match users and providers of goods and services) which have already transformed the ways in which companies look at the overall customer experience.
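One of the technologies mentioned above, the recommendation system, can be sketched in a few lines: score how similar users are by the cosine of their rating vectors, then look to the most similar user for suggestions. The names and ratings below are purely illustrative, not taken from any bank’s system:

```python
import math

# Toy user-item rating matrix (rows: users, columns: four products).
ratings = {
    "alice": [5, 3, 0, 1],
    "bob":   [4, 0, 0, 1],
    "carol": [1, 1, 0, 5],
}

def cosine(u, v):
    """Cosine similarity between two rating vectors (1.0 = identical taste)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Find the user most similar to alice; their highly rated items become candidates.
target = "alice"
others = [(cosine(ratings[target], ratings[u]), u) for u in ratings if u != target]
score, nearest = max(others)
print(f"Most similar user to {target}: {nearest} (similarity {score:.2f})")
```

Production systems use far richer signals and models, but this neighborhood-based idea is the classic starting point for matching users with goods and services.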
Read more at Forbes
Can Artificial Intelligence Predict Earthquakes?
Johnson and collaborator Chris Marone, a geophysicist at The Pennsylvania State University, have already run lab experiments using the school’s earthquake simulator. The simulator produces quakes randomly and generates data for an open-source machine-learning algorithm—and the system has achieved some surprising results. The researchers found the computer algorithm picked up on a reliable signal in acoustical data—“creaking and grinding” noises that continuously occur as the lab-simulated tectonic plates move over time. The algorithm revealed these noises change in a very specific way as the artificial tectonic system gets closer to a simulated earthquake—which means Johnson can look at this acoustical signal at any point in time, and put tight bounds on when a quake might strike.
For example, if an artificial quake was going to hit in 20 seconds, the researchers could analyze the signal to accurately predict the event to within a second. “Not only could the algorithm tell us when an event might take place within very fine time bounds—it actually told us about physics of the system that we were not paying attention to,” Johnson explains. “In retrospect it was obvious, but we had managed to overlook it for years because we were focused on the processed data.” In their lab experiments the team looked at the acoustic signals and predicted quake events retroactively. But Johnson says the forecasting should work in real time as well.
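The core idea, mapping a statistic of the continuous acoustic signal to time-remaining-until-failure, can be sketched with synthetic data. The code below is an illustration of that idea only, not the team’s actual data or algorithm: it fabricates a signal that grows louder as the simulated quake approaches, then fits an ordinary least-squares line from signal variance to time-to-quake:

```python
import random
import statistics

random.seed(0)

# Synthetic stand-in for the lab's acoustic data: as the simulated quake
# approaches, the "creaking and grinding" grows louder (higher variance).
cycle_length = 20.0  # seconds from cycle start to the simulated quake
windows = []
for i in range(40):
    t = i * 0.5                                  # time elapsed in the cycle
    time_to_quake = cycle_length - t
    amplitude = 1.0 + (t / cycle_length) * 4.0   # signal grows toward failure
    signal = [random.gauss(0.0, amplitude) for _ in range(200)]
    windows.append((statistics.pvariance(signal), time_to_quake))

# Ordinary least squares: predict time-to-quake from signal variance.
xs, ys = zip(*windows)
mx, my = statistics.mean(xs), statistics.mean(ys)
slope = sum((x - mx) * (y - my) for x, y in windows) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

print(f"Fitted slope: {slope:.2f} s per unit variance")
print(f"Late-cycle window predicts ~{slope * xs[-1] + intercept:.1f} s to quake")
```

The negative slope is the whole trick: a single snapshot of the signal’s statistics, taken at any point in the cycle, carries information about how far away failure is, which is what lets the researchers put tight bounds on the timing.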
Of course natural temblors are far more complex than lab-generated ones, so what works in the lab may not hold true in the real world. For instance, seismologists have not yet observed in natural seismic systems the creaking and grinding noises the algorithm detected throughout the lab simulations (although Johnson thinks the sounds may exist, and his team is looking into this). Unsurprisingly, many seismologists are skeptical that machine learning will provide a breakthrough—perhaps in part because they have been burned by so many failed past attempts. “It’s exciting research, and I think we’ll learn a lot of physics from [Johnson’s] work, but there are a lot of problems in implementing this with real earthquakes,” Scholz says.
Read more at Scientific American
The Future of Artificial Intelligence
Artificial General Intelligence (AGI) is a goal many AI researchers are currently devoting their careers to, in an effort to bridge the gap between today’s narrow, task-specific AI and human-level capability. While AGI wouldn’t necessarily possess any kind of consciousness, it would be able to handle any data-related task put before it. Of course, as humans, it’s in our nature to try to forecast the future, and that’s what we’ll be talking about in this article. What are some of our best guesses about what we can expect from AI in the future (near and far)? What possible ethical and practical concerns are there if a conscious AI were to be created? In this speculative future, should an AI have rights, or should it be feared?
The optimism among AI researchers about the future has shifted over the years, and timelines are strongly debated even among contemporary experts. Trevor Sands (introduced in the previous article as an AI researcher for Lockheed Martin, who stresses that his statements reflect his own opinions, and not necessarily those of his employer) has a guarded opinion. He puts it this way:
Ever since AGI has existed as a concept, researchers (and optimists alike) have maintained that it’s ‘just around the corner’, a few decades away. Personally, I believe we will see AGI emerge within the next half-century, as hardware has caught up with theory, and more enterprises are seeing the potential in advances in AI. AGI is the natural conclusion of ongoing efforts in researching AI.
Read more at Hackaday
Nvidia Beats Earnings Estimates As Its Artificial Intelligence Business Keeps On Booming
Nvidia continued to see demand for its graphics processors in the emerging world of artificial intelligence in its fourth quarter earnings reported Thursday.
In its fourth quarter earnings release, the Santa Clara, Calif.-based company reported revenue of $2.17 billion, up 55% year over year, on earnings per share of $1.13, up 117% from a year ago. Wall Street analysts had estimated $2.11 billion in revenue on EPS of 83 cents.
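The growth rates also let you back out the implied prior-year baseline. A quick sanity check on the reported figures (nothing here beyond the numbers in the release):

```python
# Back out the implied prior-year figures from the reported growth rates.
revenue_q4 = 2.17      # $ billions, this quarter
revenue_growth = 0.55  # up 55% year over year
eps_q4 = 1.13          # $ per share, this quarter
eps_growth = 1.17      # up 117% year over year

prior_revenue = revenue_q4 / (1 + revenue_growth)
prior_eps = eps_q4 / (1 + eps_growth)
print(f"Implied prior-year Q4 revenue: ${prior_revenue:.2f}B")  # ~ $1.40B
print(f"Implied prior-year Q4 EPS:     ${prior_eps:.2f}")       # ~ $0.52
```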
Traditionally, the company’s processors have mostly been used to power the latest gaming graphics, but the chips have become popular for running AI software in data centers and autonomous vehicles. A specific branch of AI, called deep learning, is where Nvidia’s processors particularly shine. These new growth areas have sent Nvidia’s stock surging 360% over the past 12 months.
“Deep learning on Nvidia GPUs [graphics processing units], a breakthrough approach to AI, is helping to tackle challenges such as self-driving cars, early cancer detection and weather prediction,” said Nvidia cofounder and CEO Jen-Hsun Huang in a statement. “We can now see that GPU-based deep learning will revolutionize major industries, from consumer internet and transportation to health care and manufacturing. The era of AI is upon us.”
Read more at Forbes