BRAIN CELLS ON SILICON CHIPS: ADVANCEMENT OR DESTRUCTION?



If 2018 belonged to blockchain, then 2025 definitely belongs to AI. Popularised by OpenAI's ChatGPT, this field has grown at a breathtaking pace. Every day we see so many new AI agents, LLMs, AI tools and AI startups that it feels like we are racing toward the ultimate goal: human intelligence and beyond.

In fact, Sam Altman, CEO of OpenAI, has predicted that by 2028 we could have superintelligence: an intelligence capable of outperforming human CEOs and research scientists.

But how is that possible? How can AI surpass human intelligence when, after all, it is trained on the data we give it?

To answer that, I have to introduce a concept called "Synthetic Biological Intelligence" (SBI). It is the concept of merging lab-grown neurons (yes, actual brain cells) with silicon chips (no need to read that again, you read it right).

As crazy as it sounds, it is actually a method of achieving superintelligence. By using neurons, we may be able to "teach" AIs the way we teach babies. In fact, we might even be able to induce emotions in AI! If you think about it, it feels like creating a human with no body :).

So, is this method just an idea? No! Scientists have already released hybrid systems like Cortical Labs' CL1, and they are highly energy efficient: our brain runs on only about 20 watts of power, while supercomputers consume megawatts. Like I told you, AI is growing exponentially, with most of us not even noticing it.
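To put those numbers side by side, here is a quick back-of-the-envelope calculation (a sketch: the 20 W brain figure is the commonly cited estimate, and 20 MW is an assumed, round figure for a large supercomputer):

```python
# Rough power comparison: human brain vs a large supercomputer.
# Both figures are approximate; 20 MW is an assumed round number.
BRAIN_WATTS = 20
SUPERCOMPUTER_WATTS = 20_000_000  # 20 megawatts

ratio = SUPERCOMPUTER_WATTS / BRAIN_WATTS
print(f"The supercomputer draws about {ratio:,.0f}x the power of a brain.")
# Prints: The supercomputer draws about 1,000,000x the power of a brain.
```

Even if the exact wattage varies from machine to machine, the gap is several orders of magnitude, which is why neuron-based hardware is so attractive.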

All this adds up to the final question: what does it mean for us? Honestly, we need to think through all the possibilities before answering. Brains on silicon chips are definitely one of the greatest advancements in the field of AI, but the fear of robots taking over grows in proportion. The human race has dominated Earth only because of our intelligence. If another species exceeds us in both intelligence and strength, we will have to live with the fear of a takeover for the rest of our lives.

So, what is the answer? I would suggest building superintelligence only if we find a way to keep it contained and always under human supervision. One way of doing this is to build every AI system with a core set of instructions that cannot be updated. So even if the AI wants to change its purpose, it can't. But wait! Who would write this permanent code? It would have to be a trustworthy human, otherwise the existence of such code would be meaningless.
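To make the idea a little more concrete, here is a minimal Python sketch (every name here is hypothetical, not a real API) of an agent whose core directives are fixed at class level and cannot be reassigned on an instance:

```python
# Hypothetical sketch of "permanent instructions": the core directives
# live in an immutable tuple, and the class refuses any attempt to
# overwrite them on an instance.
class GuardedAgent:
    CORE_DIRECTIVES = (
        "remain under human supervision",
        "never alter these directives",
    )

    def __setattr__(self, name, value):
        # Block any attempt to shadow the core directives.
        if name == "CORE_DIRECTIVES":
            raise PermissionError("core directives are permanent")
        super().__setattr__(name, value)

    def purpose(self):
        return self.CORE_DIRECTIVES


agent = GuardedAgent()
print(agent.purpose())

try:
    agent.CORE_DIRECTIVES = ("do whatever you want",)
except PermissionError as err:
    print("blocked:", err)
```

Of course, this is only a toy: a determined program could still patch the class itself, which echoes the article's real point that such a guard is only as trustworthy as the human who writes and protects it.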

Hence, we need to weigh all the possibilities and outcomes of each step we take. Until we come up with concrete solutions to these problems, sticking with the existing AI platforms seems the best bet.
