Google recently announced a new, open-source machine learning platform called TensorFlow that will accelerate the pace at which neural networks are integrated into the giant’s service portfolio, and potentially its business model.
By open sourcing the solution and empowering the community to develop the platform, Google claims it can significantly improve both the speed and power of neural network development across a wide variety of platforms and applications. Google’s hope is that TensorFlow could become the company’s new organizing principle and a major driver in the continuing evolution of the Internet.
Google is calling it the company’s “second generation” machine learning platform and says that by using TensorFlow, its developers can build and train machine learning algorithms “five times faster” than previously possible.
This will accelerate the artificial intelligence community’s efforts and signals a global shift towards more powerful AI applications in all aspects of our lives.
In Google Open Sources TensorFlow we feature content taken from Google’s TensorFlow website and the BayLearn15 conference to serve as a primer on the technology. You can grab TensorFlow at the following places:
What is TensorFlow?
TensorFlow™ is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API. TensorFlow was originally developed by researchers and engineers working on the Google Brain Team within Google’s Machine Intelligence research organization for the purposes of conducting machine learning and deep neural networks research, but the system is general enough to be applicable in a wide variety of other domains as well.
What is a Data Flow Graph?
Data flow graphs describe mathematical computation with a directed graph of nodes and edges. Nodes typically implement mathematical operations, but can also represent endpoints to feed in data, push out results, or read/write persistent variables. Edges describe the input/output relationships between nodes. These data edges carry dynamically-sized multidimensional data arrays, or tensors. The flow of tensors through the graph is where TensorFlow gets its name. Nodes are assigned to computational devices and execute asynchronously and in parallel once all the tensors on their incoming edges become available.
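The execution model described above can be sketched in a few lines of plain Python. To be clear, this is an illustrative toy, not TensorFlow’s actual API: the `Node`, `constant`, and `evaluate` names are invented here, and simple recursion stands in for TensorFlow’s real scheduler, which tracks dependencies and dispatches ready nodes to devices in parallel.

```python
# A minimal, hypothetical sketch of the data-flow-graph idea: nodes hold
# operations, edges carry values (tensors), and a node fires only once
# all of its inputs are available. This is NOT the TensorFlow API.

class Node:
    def __init__(self, op, inputs=()):
        self.op = op          # the mathematical operation this node implements
        self.inputs = inputs  # incoming edges (the nodes whose outputs feed this one)

    def evaluate(self):
        # Evaluate every incoming edge first; only when all input tensors
        # are available does this node's operation execute.
        return self.op(*(n.evaluate() for n in self.inputs))

def constant(value):
    # A source node: an endpoint that feeds data into the graph.
    return Node(lambda: value)

# Build the graph for c = (a + b) * b, then run it.
a = constant(2.0)
b = constant(3.0)
add = Node(lambda x, y: x + y, (a, b))
mul = Node(lambda x, y: x * y, (add, b))

print(mul.evaluate())  # 15.0
```

In the real system the same separation holds: you first describe the graph, and execution happens later, which is what lets TensorFlow place nodes on different CPUs or GPUs and run independent branches in parallel.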
Why Did Google Open Source TensorFlow?
If TensorFlow is so great, why open source it rather than keep it proprietary? The answer is simpler than you might think: We believe that machine learning is a key ingredient to the innovative products and technologies of the future. Research in this area is global and growing fast, but lacks standard tools. By sharing what we believe to be one of the best machine learning toolboxes in the world, we hope to create an open standard for exchanging research ideas and putting machine learning in products. Google engineers really do use TensorFlow in user-facing products and services, and our research group intends to share TensorFlow implementations alongside many of our research publications.
Large-Scale Deep Learning for Intelligent Computer Systems
Jeff Dean from Google talks about large-scale deep learning for intelligent computer systems at BayLearn – the Bay Area Machine Learning Symposium.
Nikolas Badminton is a world-respected futurist speaker who researches, speaks, and writes about the future of work, how technology is affecting the workplace, how workers are adapting, the sharing economy, and how the world is evolving. He appears at conferences in Canada, the USA, the UK, and Europe. Email him to book him for your radio show, TV show, or conference.