Do we have to fear machine learning or AI?

date: "2023-08-23"


categories:

  • "blockchain"
  • "machine-learning"

tags:

  • "artificial-intelligence"

Numerous individuals have predicted that machine learning or AI could lead to an apocalyptic scenario and the eventual demise of the world.
This fear is based on the premise that AI will become superintelligent and take control of humans.

But can we define superintelligence? Does any such thing exist?

We attain intelligence through experimentation and data. To predict something accurately, we need many variables, and therefore more computation. There is no evidence that the rules of physics or the rules of the universe can be broken, so an AI running on the hardware of the universe can't break the laws of physics. For example, even an AI would take thousands of years to crack secure cryptography with current computing power. Perhaps future quantum computers, if they are ever practical, will make some of these tasks easy. However, quantum-safe cryptography already exists.
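
To make the computation argument concrete, here is a rough back-of-the-envelope sketch (the guess rate is my own illustrative assumption, not a benchmark) of how long an exhaustive search against a 128-bit symmetric key would take:

```python
# Rough estimate of brute-forcing a 128-bit symmetric key.
# The guess rate below is an illustrative assumption, not a real benchmark.
guesses_per_second = 1e12 * 1e6   # a million machines, each trying 10^12 keys per second
keyspace = 2 ** 128               # number of possible 128-bit keys

seconds = keyspace / guesses_per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.2e} years")       # on the order of 10^13 years
```

Even with these absurdly optimistic hardware assumptions, the search time dwarfs the age of the universe; intelligence alone does not change that arithmetic.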

Some things are clear, though. To carry out a quantum computation, you need to keep all your qubits coherent. And this is very hard. Interactions of a system of quantum-coherent entities with their surrounding environment create channels through which the coherence rapidly “leaks out” in a process called decoherence. Researchers seeking to build quantum computers must stave off decoherence, which they can currently do only for a fraction of a second. That challenge gets ever greater as the number of qubits — and hence the potential to interact with the environment — increases.

[The Era of Quantum Computing Is Here. Outlook: Cloudy]

kyberlib: A Robust Rust Library for CRYSTALS-Kyber Post-Quantum Cryptography.

Weather forecasting still requires a huge amount of computation and data; AI can't predict the weather from scratch.

In the world of the internet, with the emergence of deepfakes, we can't easily know what is real and what is not.

In this case, I don't think it's the AI that is creating the problem. It's the big tech social media platforms that maintain control of the algorithms and amplify propaganda, junk information, and viral content for profit.

With better moderation tools and a governance system for apps, it's possible to tackle disinformation. For example, it's hard to fill Wikipedia with disinformation generated by AI.

Generating sophisticated deepfakes requires significant computation, and many detection algorithms are one step ahead, but detection may become more and more difficult over time.

You can look at this discussion of deepfakes on Crypto Stack Exchange:

Cryptography to tackle deepfake, proving the photo is original

crypto.stackexchange.com

Deepfake technology has become very difficult to tackle due to sophisticated machine learning algorithms. Now, even when a journalist or bystander provides photo or video evidence, the culprit denies it, claiming that it is the result of deepfake manipulation. Can TEE (Trusted Execution Environment) cryptography technology, like SGX, be used to validate whether a photo is original, taken directly from a camera, and free from any manipulation? This would ensure that the culprit cannot deny the authenticity of the photo. Does it require separate camera hardware, or can the right piece of software alone accomplish this? We can provide these special tools for journalists, etc., to decrease the harm caused by deepfake.
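
As a rough sketch of the idea in that question: if the camera (or a trusted enclave inside it) held a private signing key, it could sign a hash of each photo at capture time, and anyone could later verify the signature against a certified public key. The snippet below is only an illustration using the Python cryptography library and Ed25519 keys; a real system would keep the key inside the TEE or secure element and attest to it, which is exactly the open question above.

```python
# Illustrative sketch: sign a photo's hash at capture time, verify it later.
# Assumes the `cryptography` package; in reality the private key would live
# inside the camera's TEE/secure element, not in application code.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
import hashlib

camera_key = Ed25519PrivateKey.generate()       # stand-in for a key provisioned in hardware
public_key = camera_key.public_key()            # published/certified by the manufacturer

photo = b"raw image bytes from the camera sensor"   # placeholder for the captured image
digest = hashlib.sha256(photo).digest()
signature = camera_key.sign(digest)             # done at capture time, inside the enclave

# Later, a journalist or fact-checker verifies the photo is unmodified;
# verify() raises InvalidSignature if the bytes were tampered with.
public_key.verify(signature, hashlib.sha256(photo).digest())
```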

Further, producing accurate and reliable inference necessitates high-quality data and substantial computational resources, whereas generating false information barely depends on data and computation. A good AI, however, can help detect false inferences.

AI models may not be able to detect content written by AI, but a well-trained AI relying on accurate data can predict whether AI-generated content is disinformation. Obviously, AI can't tell what you ate for your last dinner if you lie about it, because it doesn't have that information; nor can it predict what you will eat for dinner tomorrow in a probabilistic universe.

AI for political control

Depending on closed-source AI systems for decision-making can result in biased and exploitative decisions by companies and governments: for example, using them for surveillance to serve personalized ads, or attempts by big tech companies and governments to take control of the political system. It's better to run open-source AI models locally to make predictions from your own data.
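
For example, a small open model can run entirely on your own machine, so your prompts and data never leave it. A minimal sketch, assuming the Hugging Face transformers library and the small open gpt2 model (any locally runnable open model would do):

```python
# Minimal sketch of running an open-source model locally with Hugging Face transformers.
# After the one-time model download, inference happens on your own hardware;
# no prompt or data is sent to a third-party service.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")   # small open model, runs on CPU
result = generator("Open-source AI matters because", max_new_tokens=30)
print(result[0]["generated_text"])
```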

AI for warfare

There are also dangers associated with governments using AI to automate their military capabilities for mass killing, genocide and warfare. Implementing better democratic structures, designs, and international laws can help address such issues.

Some of the dangers associated with AI include the creation of atom bombs, bioweapons, and the escalation of cyber-attacks. Although there are obstacles in obtaining the necessary knowledge, raw materials, and equipment for such attacks, these barriers are diminishing, potentially accelerated by advancements in AI.

It is essential to note that the decrease in these barriers is not solely due to AI but is rather a result of advancements in other technologies. For example, a graduate biology student can build a virus with access to technologies such as DNA printers, chemical reagents for DNA mutation, next-generation sequencing (NGS), and so on.

AI is not a perpetual motion machine

AI can't create perpetual motion machines through its intelligence; it consumes energy, electricity, and natural resources to function. Therefore, it needs to be used efficiently, and only when necessary. Additionally, it cannot fully replace human labor.

End of Moore’s law

The end of Moore's Law is an inevitable reality that the semiconductor industry will eventually face. Moore's Law, which states that the number of transistors on a chip doubles every two years, has been a driving force in the rapid advancement of technology. However, as we approach the physical limits of miniaturization, it becomes clear that this trend cannot continue indefinitely. The fundamental obstacles identified by Moore himself, the speed of light and the finite size of atoms, will inevitably create a bottleneck for further progress.

This will, in turn, also create a bottleneck for the amount of computation AI can utilize, given how resource- and data-hungry it is.
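
As a quick arithmetic illustration of what "doubling every two years" means (starting from the Intel 4004's roughly 2,300 transistors; the projection itself is only illustrative):

```python
# Moore's law as a simple doubling formula: transistor counts double every two years.
n0 = 2_300                 # transistors in the Intel 4004 (1971), used as a starting point
for years in (10, 30, 50):
    n = n0 * 2 ** (years / 2)
    print(f"after {years} years: ~{n:.1e} transistors")
# Over 50 years that is 25 doublings, a factor of about 33 million,
# which is why atom-scale physical limits eventually stop the trend.
```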

AI is a statistical model

AI, or artificial intelligence, operates as a statistical model, meaning that it relies on patterns and probabilities rather than providing deterministic results. Due to its statistical nature, errors are inherent in its functioning, and complete precision cannot be guaranteed. It is a tool that excels in tasks governed by well-defined protocols.

To illustrate, consider the analogy of cooking. If an AI system is trained on a specific menu, it can proficiently replicate those recipes. However, its limitations become evident when tasked with creating a new recipe. In such cases, there is no assurance that the outcome will be palatable.

Moreover, it's essential to recognize that AI doesn't possess the ability to think or make decisions in the way humans do. Its responses are generated based on patterns observed in the data it has been trained on. Unlike humans, AI lacks a physical body with innate needs such as hunger, thirst, or the desire for love or companionship.

Consequently, its outputs are based on the information contained in human-written data of human experiences. It cannot independently seek or comprehend fundamental human experiences.

AI can't fight for your privacy, women's rights, LGBTQ rights, the rights of disabled people, workers' rights, or climate action, because it is not built with the same structure as humans and can't feel like humans. It doesn't have any evolutionary goals.

We make hundreds of decisions throughout the day based on how our body feels. AI can't decide for us on its own because it can't feel as humans do. It can't even make simple decisions, such as whether to take a bath, take a nap, or wash our hands, as AI doesn't need sleep and can't sense the coldness of water during a bath.

Currently, I frequently use chat AI, particularly open-source models, to check grammar, enhance the sentences I compose, and effectively convey well-established ideas and theories the AI was trained on. I am unable to use AI for generating new ideas and perspectives. AI does not possess a human brain or body and cannot feel or think like us.

If we were to simulate either our brain or our entire body, would it behave exactly like us?

No, as it violates the principle of form following function. A robot equipped with a simulated brain may replicate sensations like hunger, even if only approximately, but it cannot consume actual food to satisfy that hunger or drink water to quench its thirst. Its interaction with the environment will inevitably differ, leading to decisions that deviate from human decision-making processes.

Simulation is not the same as the real world; the two behave differently, no matter how many computational resources you use. A simulation cannot capture the full complexity of real situations; it's like attempting to feed the entire universe into a computer. Silicon hardware and CPUs can only execute machine code (opcodes) based on the properties of silicon. Similarly, quantum computers behave differently because of their use of superconductors. To replicate the properties of water entirely, you need water itself; no simulation can achieve this. Simulations can only make simplified assumptions, and the process is not automatic: you must manually encode rough mathematical models and algorithms describing how water behaves into the opcodes, whereas real water does this automatically.

Take, for example, molecular dynamics simulation:

Unfortunately, the calculations required to describe the absurd quantum-mechanical motions and chemical reactions of large molecular systems are often too complex and computationally intensive for even the best supercomputers. Molecular dynamics (MD) simulations, first developed in the late 1970s, seek to overcome this limitation by using simple approximations based on Newtonian physics to simulate atomic motions, thus reducing the computational complexity.

These successes aside, the utility of molecular dynamics simulations is still limited by two principal challenges: the force fields used require further refinement, and high computational demands prohibit routine simulations greater than a microsecond in length, leading in many cases to an inadequate sampling of conformational states. As an example of these high computational demands, consider that a one-microsecond simulation of a relatively small system (approximately 25,000 atoms) running on 24 processors takes several months to complete.
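
To give a flavor of the "simple approximations based on Newtonian physics" mentioned above, here is a toy sketch of one velocity-Verlet integration step for two particles interacting through a Lennard-Jones potential. Real MD packages do essentially this for tens of thousands to millions of atoms, billions of times, which is where the cost comes from. The parameters and units below are illustrative, not from any real force field.

```python
# Toy molecular-dynamics loop: two particles, Lennard-Jones force, velocity Verlet.
# Illustrative reduced units; real force fields and systems are far more involved.
import numpy as np

epsilon, sigma, mass, dt = 1.0, 1.0, 1.0, 0.001

def lj_force(r_vec):
    """Lennard-Jones force on particle 1 due to particle 2 (r_vec = pos1 - pos2)."""
    r = np.linalg.norm(r_vec)
    magnitude = 24 * epsilon * (2 * (sigma / r) ** 12 - (sigma / r) ** 6) / r
    return magnitude * r_vec / r

pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])   # positions of the two particles
vel = np.zeros_like(pos)

for step in range(1000):                              # 1000 tiny steps cover only 1 time unit
    f = lj_force(pos[0] - pos[1])
    forces = np.array([f, -f])                        # Newton's third law
    vel += 0.5 * dt * forces / mass                   # half-kick
    pos += dt * vel                                   # drift
    f = lj_force(pos[0] - pos[1])
    forces = np.array([f, -f])
    vel += 0.5 * dt * forces / mass                   # second half-kick
```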

Simulating is costly

Simulating our world will always be costly. Rather than only fearing the intelligence of AI as a doomsday scenario for the world, we should also focus on the environmental impact of running AI, which could genuinely be detrimental to our future.

Generative AI’s environmental costs are soaring — and mostly secret

One assessment suggests that ChatGPT, the chatbot created by OpenAI in San Francisco, California, is already consuming the energy of 33,000 homes. It’s estimated that a search driven by generative AI uses four to five times the energy of a conventional web search. Within years, large AI systems are likely to need as much energy as entire nations. A lawsuit by local residents revealed that in July 2022, the month before OpenAI finished training the model, the cluster used about 6% of the district’s water.

Humans cannot entirely rely on AI for decision-making due to its limitations; it can only serve as an assistant.

Reputable AI models like ChatGPT and open-source models like HuggingFace's Chat can be useful for explaining information when trained on high-quality academic sources.

AI is a heuristic algorithm, unlikely to give the most accurate solution

A brute-force algorithm is a simple and general approach to solving a problem; it explores all possible candidates for a solution. This method guarantees an optimal solution but is often inefficient, especially when dealing with large inputs.

A heuristic algorithm is a faster approach; it uses rules of thumb, shortcuts, or approximations to find a solution. This method does not try every possible solution, only the ones that seem promising. Heuristic algorithms are more difficult to implement and do not guarantee an optimal solution, but they are designed to be faster than brute-force methods.
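
A small sketch of the difference, using a toy problem of my own (pick numbers summing as close to a target as possible without exceeding it): the brute-force version checks every subset and is guaranteed optimal but scales as 2^n, while the greedy heuristic is fast but can miss the best answer.

```python
# Brute force vs. a greedy heuristic on a toy subset-sum problem.
from itertools import combinations

def brute_force(items, target):
    """Tries all 2^n subsets; optimal, but exponential in len(items)."""
    best = 0
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            s = sum(subset)
            if best < s <= target:
                best = s
    return best

def greedy(items, target):
    """Rule of thumb: grab the largest items that still fit; fast, not always optimal."""
    total = 0
    for x in sorted(items, reverse=True):
        if total + x <= target:
            total += x
    return total

items, target = [9, 7, 6, 5], 13
print(brute_force(items, target))  # 13  (7 + 6, the optimal answer)
print(greedy(items, target))       # 9   (greedy grabs 9 first, then nothing else fits)
```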

ChatGPT is bullshit

The machine does this by constructing a massive statistical model, one which is based on large amounts of text, mostly taken from the internet. This is done with relatively little input from human researchers or the designers of the system; rather, the model is designed by constructing a large number of nodes, which act as probability functions for a word to appear in a text given its context and the text that has come before it. Rather than putting in these probability functions by hand, researchers feed the system large amounts of text and train it by having it make next-word predictions about this training data. They then give it positive or negative feedback depending on whether it predicts correctly. Given enough text, the machine can construct a statistical model giving the likelihood of the next word in a block of text all by itself.
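
The quoted description can be boiled down to a toy version: count which word follows which in some text, turn the counts into probabilities, and sample the next word from them. This is a deliberately crude sketch (real models use billions of parameters and attention, not bigram counts), but it shows that the output is a statistical guess, not understanding.

```python
# A toy "next-word predictor": bigram counts turned into probabilities.
# Real LLMs are vastly larger and use neural networks, but the principle
# (predict the next token from statistics of the training text) is the same.
import random
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ate the fish"
words = training_text.split()

# Count which word follows which; wrap around so every word has a successor.
following = defaultdict(Counter)
for current, nxt in zip(words, words[1:] + words[:1]):
    following[current][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to how often it followed `word` in training."""
    counts = following[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

word = "the"
for _ in range(5):
    print(word, end=" ")
    word = next_word(word)
print()
```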

Does AI Have Any Agency or Evolutionary Goals? Does the Darwin Principle Apply to AI?

AI and Agency

Agency refers to the capacity of an entity to act independently and make its own choices. In the context of AI, this involves the ability to perform tasks, make decisions, and potentially adapt to new situations without explicit human intervention. Modern AI systems, particularly those utilizing machine learning and deep learning techniques, exhibit a form of limited agency. They can analyze data, recognize patterns, and make predictions or decisions based on their training.

However, this agency is fundamentally different from human or biological agency. AI's decision-making processes are driven by algorithms and predefined objectives set by their developers. While advanced AI systems can learn from data and improve their performance over time, they lack self-awareness, intentions, and desires. Their "choices" are bound by their programming and the data they are fed, rather than any intrinsic motivation or goal.

Evolutionary Goals and AI

Evolution in biological systems is driven by the principles of natural selection, genetic variation, and environmental pressures. Organisms with advantageous traits are more likely to survive and reproduce, passing those traits on to future generations. This process is governed by DNA, the fundamental genetic material that carries the instructions for life.

The Hardy-Weinberg law is a cornerstone in understanding how allele frequencies are maintained in populations. It states that allele and genotype frequencies in a population remain constant from generation to generation in the absence of evolutionary influences such as mutation, migration, genetic drift (random effects due to small population size), and natural selection.
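
As a concrete illustration of the Hardy-Weinberg relation p^2 + 2pq + q^2 = 1 (the allele frequency below is an illustrative value, not real data):

```python
# Hardy-Weinberg equilibrium: with allele frequencies p and q (p + q = 1),
# the expected genotype frequencies are p^2 (AA), 2pq (Aa) and q^2 (aa),
# and they stay constant absent mutation, migration, drift and selection.
p = 0.7                     # frequency of allele A (illustrative value)
q = 1 - p                   # frequency of allele a

homozygous_dominant = p ** 2      # AA: 0.49
heterozygous = 2 * p * q          # Aa: 0.42
homozygous_recessive = q ** 2     # aa: 0.09

assert abs(homozygous_dominant + heterozygous + homozygous_recessive - 1) < 1e-9
print(homozygous_dominant, heterozygous, homozygous_recessive)
```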

In contrast, AI does not possess DNA or any equivalent genetic material. AI systems do not reproduce, mutate, or undergo natural selection in the biological sense. Instead, they are designed, developed, and updated by human engineers. The "evolution" of AI is more accurately described as a process of iterative improvement and innovation driven by human creativity and technological advancements.

The Darwinian Principle and AI

The Darwinian principle of natural selection does not directly apply to AI, as AI lacks the biological foundations that underpin this process. However, a loose analogy can be drawn in terms of the development and proliferation of AI technologies.

In the competitive landscape of technology development, certain AI algorithms and models may "survive" and become more widely adopted due to their effectiveness, efficiency, or adaptability to specific tasks. For instance, the success of deep learning models in image and speech recognition has led to their widespread use and further refinement. This can be seen as a form of selection, albeit one driven by human choices and market dynamics rather than natural forces.

AI, Carbon, and the Essence of Life

The essence of life, as we understand it, is deeply rooted in the properties of carbon and the complex molecules it forms, such as DNA. Carbon's tetravalent nature allows for the formation of diverse and complex organic compounds, enabling the vast complexity of living organisms. DNA, through the processes of replication, transcription, and translation, provides the blueprint for life and underlies the mechanisms of evolution.

AI, on the other hand, is based on silicon and electronic components. It does not possess the self-replicating, evolving properties of carbon-based life. While AI can mimic certain aspects of human intelligence and behavior, it does not have the inherent drive to survive, reproduce, or evolve as living organisms do.

Is reality subjective or objective?

Is reality an illusion?

https://bigthink.com/thinking/objective-reality-2/

You bite into an apple and perceive a pleasantly sweet taste. That perception makes sense from an evolutionary perspective: Sugary fruits are dense with energy, so we evolved to generally enjoy the taste of fruits. But the taste of an apple is not a property of external reality. It exists only in our brains as a subjective perception.

Cognitive scientist Donald Hoffman told Big Think:

“Colors, odors, tastes and so on are not real in that sense of objective reality. They are real in a different sense. They’re real experiences. Your headache is a real experience, even though it could not exist without you perceiving it. So it exists in a different way than the objective reality that physicists talk about.”

A bat with sonar experiences a reality vastly different from our own. Using echolocation, bats emit high-frequency sounds that bounce off objects, allowing them to navigate and hunt with precision in complete darkness. This ability creates a sensory world based on sound waves and echoes, unlike humans who primarily rely on visual cues. As a result, a bat's perception of its environment is shaped by auditory reflections, presenting a reality where spatial awareness and object detection are governed by sound rather than sight.

Color blindness is a condition in which an individual cannot perceive certain colors or color combinations accurately. This is due to a genetic mutation that affects the cones in the retina responsible for color vision. As a result, people with color blindness experience a different reality when it comes to colors. For example, what appears green to a person with normal vision may be hard to distinguish from red for someone with red-green color blindness.

Synesthesia is a neurological condition in which the stimulation of one sense triggers an automatic, involuntary response in another sense. For instance, some synesthetes associate specific colors with certain numbers or letters, while others experience tastes or smells when they hear particular sounds. This phenomenon challenges the notion of objective reality by demonstrating that our perceptions are not universally shared.

Schizophrenia is a mental disorder characterized by delusions, hallucinations, and disorganized thinking. Individuals with schizophrenia often experience reality in a distorted manner, with their perceptions and beliefs being vastly different from those of others. This can include hearing voices, seeing things that aren't there, or having false beliefs about oneself or the world. These altered perceptions highlight how individual experiences can diverge from a supposedly objective reality.

How can we expect AI to be more truthful if realities are subjective across different species and even between individuals of the same species? AI doesn't even have a human brain, and it can never simulate one because it doesn't have the same form, structure, and function as a human.

Why Do People Believe the Earth Is Flat?

http://web.archive.org/web/20230802193056/https://nautil.us/why-do-people-believe-the-earth-is-flat-305667/

So there is a chunk of Flat-Earth believers who brand themselves as the only true skeptics alive. (“No, I will not believe anything that I cannot test myself.”) There are many things that are very difficult to test. It sometimes takes a certain amount of skill, or knowledge of mathematics, to be able to conclusively prove some things. Even people who dedicated their lives entirely to science have only so much time. Most of what we take as empirically falsifiable scientific truth we cannot falsify ourselves.

Let's set aside the realm of deep fakes, which involve the manipulation of celebrities' photos and are shared by some anonymous user. Instead, consider how one can trust an infographic or news article crafted by a journalist or scientist. Ultimately, it boils down to placing trust in institutions. Institutions with strong governance, ethical individuals, and well-designed incentives foster trust. Conversely, poorly governed institutions erode that trust.

Through the decentralization of computing resources (blockchain), AI remains under the control of users rather than corporations or governments, and game theory can be employed to disincentivize its misuse.

What do we need to decentralize in the coming years?

Preventing AI misuse

Here is how we can stop AI from being misused:

Preventing the misuse of AI involves a combination of technical, ethical, and regulatory measures. Here are some steps that can be taken to address AI misuse:

  1. Ethical Guidelines and Regulation: Governments and organizations can establish clear ethical guidelines and regulations for the development, deployment, and use of AI technologies. These guidelines should address issues such as bias, privacy, security, and potential harm.

  2. Transparency and Accountability: AI systems should be designed with transparency in mind. Developers should provide explanations for AI decisions, making the decision-making process understandable and traceable. Accountability mechanisms should be in place to hold individuals and organizations responsible for AI misuse.

  3. Bias Mitigation: Developers should actively work to identify and mitigate biases in AI systems. Bias can lead to unfair or discriminatory outcomes. Regular audits and assessments of AI systems can help identify and rectify bias issues.

  4. User Education: Educating users about the capabilities and limitations of AI can help prevent its misuse. Users should be aware of the potential for AI-generated content to be manipulated or used for misinformation.

  5. Oversight and Review: Establish mechanisms for independent oversight and review of AI systems. This could involve third-party audits or regulatory bodies that assess the ethical and legal implications of AI applications.

  6. Collaborative Efforts: Governments, industry stakeholders, researchers, and civil society organizations should collaborate to establish norms, standards, and best practices for AI development and usage.

  7. Whistleblower Protections: Encourage individuals within organizations to report concerns about AI misuse without fear of retaliation. Whistleblower protections can help expose unethical practices.

  8. Continuous Research: Ongoing research in AI ethics and safety is essential to stay ahead of potential misuse scenarios. Researchers can develop techniques to detect and counteract AI-generated misinformation, deepfakes, and other harmful content.

  9. Global Cooperation: Given that AI has a global impact, international collaboration is crucial. Countries can work together to develop harmonized regulations and share best practices.

  10. Responsible Innovation: Tech companies and AI researchers should consider the ethical implications of their work from the outset and prioritize the development of AI that aligns with societal values.

Open-sourcing AI:

Open sourcing an AI model can prevent its misuse by allowing for greater transparency and collaboration within the community. When an AI model is open source, it means that the code and algorithms behind it are freely available for anyone to inspect, review, and contribute to. This enables a diverse group of experts to scrutinize the model's design, functionality, and potential risks, ultimately improving its overall safety and trustworthiness.

On the other hand, the opaque AI models that big tech companies train on our data can create danger, produce biased decision-making, and erode our privacy, as they are often proprietary and inaccessible to the public. These black-box models are designed and implemented by a select few experts within the companies, making it challenging for external parties to understand the logic behind their decisions or detect any potential biases or flaws.

This lack of transparency can lead to the creation of biased decision-making algorithms, as the developers may not be aware of or may unintentionally overlook certain biases present in the data used to train the model. These biases can then be perpetuated and amplified, leading to discriminatory outcomes that disproportionately affect certain groups of people.

Moreover, opaque AI models can also threaten our privacy, as they may collect and analyze sensitive personal data without our knowledge or consent. Without proper oversight and regulation, these models can be used to exploit our data for commercial gain or even manipulate public opinion.

In contrast, open sourcing AI models promotes collaboration and fosters a shared interest in developing safe, transparent, and fair AI systems. By making the code and algorithms publicly accessible, developers and researchers can work together to identify and address potential issues, ensuring that the technology benefits society as a whole rather than a select few.

Preventing AI misuse requires a multifaceted approach involving technology, policy, education, and ethical considerations. It's an ongoing challenge that requires vigilance and adaptation as AI technology evolves.

Data detox kit

Explore guides about Artificial Intelligence, digital privacy, security, wellbeing, misinformation, health data, and tech and the environment