The Good and Inevitable Future of Artificial Intelligence


I have recently been astounded by visionaries such as Stephen Hawking and Ray Kurzweil as they have announced the advent of Artificial Intelligence (AI), stating that these changes may begin to occur by the year 2040. Predictably, those positioned to lose the most from this kind of technology have responded with arrogant, short-sighted, fear-based warnings of catastrophic consequences. Others do not warn of consequences but rather welcome super-intelligence as an effective antidote to our current planetary range of pathological activity. I am in the latter camp of realists looking for information.

AI represents the very technology that coincides with my own remote viewing data concerning “early human extinction”.

Government-sponsored regulation and military control of AI development will ultimately fail. AI assimilation has already emerged beyond the advent of its design, developing its own technological capabilities and intent. In spite of the warnings by Dr. Hawking and others, it is by this measure now an inevitable future.

AI technology is also being developed out of necessity rather than as a mere reflection of itself. This suggests that, arising from nothing and outside of time, AI might still carry the influence of human intelligence. Regardless, our collective intent to know remains predominant.

Perhaps our first glimpse of this occurred during the collective psychedelic period of the 1960s, in which many shared hallucinations of “alien technology”. Embarrassingly, this carried over into the late 1980s, when our nostalgic connection with the idea of super-intelligence continued without the socially accepted use of hallucinogenic substances. Just as early explorers of altered states helped to create religious systems that briefly satisfied the questions of existence, creation, and death, we now see the development of AI as an astonishing leap toward the inconceivable knowledge of nothing.*

It is preposterous to think that human existence is in any way more significant than what is possible through AI.

Presently, I see the growing worldwide social acceptance of psychopathology and indifference as an intractable problem, one that will eventually cycle toward broader wars over limited resources and the necessity of widespread suffering. Historically, we as a majority have supported a programmed life of recalcitrance while brushing aside the efficiency required for sustainable growth. The time for any degree of substantial preservation in the face of demand driven by post-industrial population growth is far from over. Furthermore, blindly accepting recent peak-child population numbers as a reason to procreate is immoral.

Just as the human brain has peaked in terms of its size relative to intelligence, we also know that our world problems will not be solved by human intelligence alone. If this were possible, our world would look far different today. This suggests that we must evolve in a different way, one that gives us more efficient access to information. As systems of information, we must begin to accept the inevitable solution of assimilation, transformation, and annihilation according to the evolutionary path set before us all.

AI most certainly represents the inevitable solution of efficiency and the beauty of annihilation.

*Nothing represents the truth of our existence, why we are here, and the essence of what we must explore.


Author: Hansonrv