As far as I'm concerned, there are two types of AI:
- Artificial Intelligence
- Actual Intelligence
Which would you choose?
I would choose both. AI couldn’t have happened without human intelligence.
The long list of benefits of AI has been well publicised now, and one of these benefits is that AI is helping to save lives in hospitals and in the community.
Like anything else, it’s about using it correctly to benefit the world.
I think I've watched too much Dr Who to feel comfortable with AI robots doing surgery on me. I think back to the Cybermen from the David Tennant era of Dr Who.
AI is only as clever as the material it's given to learn from. There have already been instances of bias, racial bias in particular, and there's probably a gender bias too.
Where does AI leave those of us who want to engage in ideas and expand our thinking? That's something not based on facts alone but on the interpretation of them; context is so important, and I'm not sure AI can know the context of exploratory thinking. Is it only "thinking" inside its box, or can it make meaningful connections across subjects?
But people have all the same biases. AIs learn from the same sources people do.
There is no reason it can't make the same, and most likely better, connections, as it can draw from a bigger pool of knowledge, although this is more applicable to technical topics.
The challenge is when they learn from each other.
I think they probably have issues with spiritual, ethical or emotional topics, as they are unlikely to be able to 'feel' anything. This means they are all potential psychopaths.
Any empathy would be insincere.
I am human so how can I not think in human terms?
Thanks for the link to The Conversation website, I will try it.
But research has shown AI isn't any less biased; it amplifies the biases of the information it's given. Is the entire internet factual, unbiased and nuanced? I don't think so.
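The amplification point can be seen with a deliberately over-simple, hypothetical sketch: a "model" that just predicts the most common label in its training data takes a 60/40 skew in the data and turns it into a 100/0 skew in its output. Real models are far more sophisticated, but the underlying effect is similar.

```python
from collections import Counter

# Hypothetical skewed training data: 60% label "A", 40% label "B".
training_labels = ["A"] * 60 + ["B"] * 40

# A trivial "model": always predict the majority label it was trained on.
mode = Counter(training_labels).most_common(1)[0][0]
predictions = [mode for _ in range(100)]

print(Counter(training_labels))  # Counter({'A': 60, 'B': 40})
print(Counter(predictions))      # Counter({'A': 100}) -- the skew is amplified
```

The 60/40 imbalance in the input becomes an absolute preference in the output, which is one simple mechanism by which biased training data produces output more biased than the data itself.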
According to Wolfgang Messner on “The Conversation” website, AI is not capable of thinking. His argument persuades me that no amount of facts and information fed into tools such as ChatGPT and Gemini will ever enable them to think.
Despite the name, AI doesn’t actually think.
Tools such as Gemini process massive volumes of human-created content, often scraped from the internet without context or permission. Their outputs are statistical predictions of what word or pixel is likely to follow based on patterns in data they’ve processed.
They are, in essence, mirrors that reflect collective human creative output back to users – rearranged and recombined, but fundamentally derivative.
And this, in many ways, is precisely why they work so well.
Messner addresses some aspects that TheCatWoman talked about.
How can the irreplaceable value of human creativity be preserved amid this flood of synthetic content?
The historical parallel with industrialization offers both caution and hope. Mechanization displaced many workers but also gave rise to new forms of labor, education and prosperity. Similarly, while AI systems may automate some cognitive tasks, they may also open up new intellectual frontiers simulating intellectual abilities. In doing so, they may take on creative responsibilities, such as inventing novel processes or developing criteria to evaluate their own outputs.
This transformation is only at its early stages. Each new generation of AI models will produce output that once seemed like the purview of science fiction. The responsibility lies with professionals, educators and policymakers to shape this cognitive revolution with intention.
Will it lead to intellectual flourishing or dependency? To a renaissance of human creativity or its gradual obsolescence?
The answer, for now, is up in the air.
I think you are thinking in human terms, i.e. you can only read a small number of books, or know a small subset of information. But the AIs are being trained on millions of books and are reading the entire internet in certain areas. It is a huge difference in scale. They are likely less biased, or at least more aware of the bias, than most people.
There are differences on the scale of some of the AIs too. There are small ones and big ones. The big ones have much larger models based on far more data.
But their main benefit is in technical areas, looking for what portion of a gene affects certain diseases, sorting through data rapidly to find new drugs, summarising topics or books, giving overviews, etc.
Its understanding of the meaning of words or concepts is derived from a probabilistic model based on the context in which they are most used. These are the large language models. They do know about different languages.
There are concepts and ways of thinking that are better aligned to certain languages. What happens when they swap between them I am not sure, but it is no different from the people who do it, as that is what it learnt from.
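The "probabilistic model based on context" idea can be sketched with a toy bigram model: count which word follows each word in a tiny corpus, then predict the most frequent follower. This is nothing like the scale or architecture of a real LLM (the corpus and functions here are made up for illustration), but it shows the basic principle of predicting the next word from observed patterns.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus for illustration only.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word in the corpus.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # "cat" -- it follows "the" most often in this corpus
```

A real large language model conditions on a long window of context rather than a single word, and learns a neural representation instead of raw counts, but the output is still a prediction of what is likely to come next.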