Eventually, virtually everything will be available through Google. You will be able to get information on anything and everything you ever wanted to know. As technology advances and more and more data is logged into computers, the amount of data indexed by Google will approach infinity.
In addition to all preexisting historical information (books, music, manuals, software, programs, and everything else that people and machines have already produced), an exponentially increasing amount of new data is being generated every single day. Millions of people around the planet are blogging their ideas and recording their lives in digital format: thousands of different languages, millions of new documents, every single day.
As time moves forward, technology will continue to make it easier and cheaper for people to digitally record and publish anything and everything. Individuals will become independent media broadcasters, sharing their lives virtually with anyone who is willing to pay attention. The incredible amount of data generated by millions of people living digitally will be available online and indexed, tracked, monitored, and used by Google to influence markets and make money.
In addition to all of this information, factor in the endless stream of automatically generated data. You know, the stuff cranked out by the machines that monitor user activity, traffic patterns, lab analyses, stock markets, weather systems, and so on. These "low-level" data are then analyzed, discussed, elaborated, and reported through various digital media. As this information is received by other individuals, it becomes a part of their lives, and thus is written about, recorded, and made available to others. The same is true for data generated through television, movies, music, and any other form of digitally consumed media. In the blogosphere, this phenomenon is referred to as the "echo chamber." This kind of cumulative resonance decreases the signal-to-noise ratio and further inflates the amount of human-generated data.
So what do we have here? An unlimited and even redundant stream of data on everybody and everything. Information about everything you have ever done, every place you have ever been, everything you are doing currently, even everything you intend to do in the future. Every person you have ever known, every purchase you have ever made, every rule you have ever broken. Mechanical processes, computer processes, natural processes. Earth, moon, space, and beyond. Everything happening around the globe in every home, factory, industry, market, and government. Everything on television, everything on the radio and the internet. Every speech, verdict, sentence, birth, and death. Everything.
Given that, return to Google for a moment. Google will no doubt be connected to every bit of this infinite collection of data. At any given time, any authorized person will have access to virtually unlimited knowledge about anyone or anything. Likewise, but on a greater scale, software will provide cumulative analyses of aggregate collections of data. In addition to focusing on a single person, event, or process, authorized users will in effect be able to "zoom out" and access information about entire groups of people. For example, imagine software that tracks and delivers reports on entire populations of individuals, generating comprehensive statistical information on every recordable aspect of their existence (blogs, activities, families, entertainment, health, politics, religion, shopping), presented in such a way as to enable the end user to zoom in or out on any aspect, or series of aspects, to any degree desired.
Still with me? Now imagine an integrated collection of such aggregate analysis programs, each designed to harvest and process data for hundreds of thousands of different analytical dimensions, each producing statistical aggregations of perpetual data streams generated in real time and integrated with every other aggregate program. For example, there will be software to integrate the weather data from thousands of stations and satellites around the globe. That data stream is then processed and made available to other aggregate systems, such as those used for flight, travel, and traffic analyses. These "meta-analytical" systems retain their granular integrity while serving a cumulative purpose by contributing to the "big picture." Such meta systems continue up the pyramid structure, merging natural and mechanical processes, government and military events, biological and inorganic data, becoming further integrated at each level until, ultimately, the most comprehensive meta-systems converge into a single, "all-knowing" source of information. With such a system, the earth's data is accessible at any desired level, from any desired perspective, and with any level of granularity. Certainly not true omniscience, but as close to it as humans will ever get.
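The zoom-in/zoom-out behavior described in the last two paragraphs resembles what data-warehousing people call roll-up and drill-down over a dimensional hierarchy. A minimal sketch of that idea, using an invented toy dataset of record counts keyed by (region, topic) — every name and number here is hypothetical, purely for illustration:

```python
# Toy sketch of roll-up ("zoom out") and drill-down ("zoom in") over
# hierarchically keyed data. The dataset below is invented sample data.
records = {
    ("US", "health"):   120,
    ("US", "shopping"): 300,
    ("EU", "health"):    80,
    ("EU", "shopping"): 150,
}

def roll_up(data, level):
    """Aggregate counts, keeping only the first `level` key dimensions."""
    totals = {}
    for key, count in data.items():
        coarse = key[:level]                      # drop finer dimensions
        totals[coarse] = totals.get(coarse, 0) + count
    return totals

def drill_down(data, prefix):
    """Return only the records whose key starts with `prefix`."""
    return {k: v for k, v in data.items() if k[:len(prefix)] == prefix}

# Zoom out: totals per region, ignoring topic.
regional = roll_up(records, 1)
# Zoom in: only the records under the "US" branch of the hierarchy.
us_only = drill_down(records, ("US",))
```

A real system of the scale imagined above would layer many such aggregations, each level feeding coarser summaries upward while the underlying granular records remain queryable, which is exactly the "retain granular integrity while serving a cumulative purpose" property described.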
What will be the implications of this so-called “artificial omniscience”? It depends on who controls it. If you honestly think that those in power will be letting Joe Blow from Nowhere, Anytown, have access to this kind of power, you need to put down the crack pipe and check yourself. In case you hadn’t noticed by now, the ruling elite operate autonomously, using their money and power to influence governments and corporations to do their bidding. Thus, it is quite obvious that artificial omniscience will belong to the forces that rule the earth at the time of its manifestation. As the world plods along toward the inevitable “brave new” one-world governmental system, the stage is being set for the one who will sit, briefly, at the top of it all. Rest assured that the one who wants all of the power will also want all of the knowledge. On earth, during that time, this person will be artificially omnipotent, artificially omnipresent, and now, as we have seen here, artificially omniscient. Note the key word here: “artificially.”