My thoughts on AI

As far as I'm concerned, there are two types of AI.

  1. Artificial Intelligence
  2. Actual Intelligence

Which would you choose?

  • Both. There are good and bad points to each, but I think artificial intelligence is going to take over. Most jobs will disappear at some point in the future, and a lot of the subjects that students are studying and training for will become obsolete, a complete waste of time and effort! Not right away: it will be more hybrid at the start, but as time progresses AI will steadily take over. With advancing robotics as well, they will mesh the two together, and then jobs will start to become a thing of the past. It might be slower at first, but it will speed up over time as things advance.

  • I am human so how can I not think in human terms?

    Thanks for the link to the conversation website, I will try it.

  • But research has shown AI isn't any less biased; it amplifies the biases of the information it's given. Is the entire internet factual, unbiased and nuanced? I don't think so.

  • According to Wolfgang Messner on “The Conversation” website, AI is not capable of thinking. His argument persuades me that no amount of facts and information fed into tools such as ChatGPT and Gemini will ever enable them to think.

    Despite the name, AI doesn’t actually think.

    Tools such as Gemini process massive volumes of human-created content, often scraped from the internet without context or permission. Their outputs are statistical predictions of what word or pixel is likely to follow based on patterns in data they’ve processed.

    They are, in essence, mirrors that reflect collective human creative output back to users – rearranged and recombined, but fundamentally derivative.

    And this, in many ways, is precisely why they work so well.

    Messner addresses some of the aspects discussed above.

    How can the irreplaceable value of human creativity be preserved amid this flood of synthetic content?

    The historical parallel with industrialization offers both caution and hope. Mechanization displaced many workers but also gave rise to new forms of labor, education and prosperity. Similarly, while AI systems may automate some cognitive tasks, they may also open up new intellectual frontiers by simulating intellectual abilities. In doing so, they may take on creative responsibilities, such as inventing novel processes or developing criteria to evaluate their own outputs.

    This transformation is only at its early stages. Each new generation of AI models will produce output that once seemed like the purview of science fiction. The responsibility lies with professionals, educators and policymakers to shape this cognitive revolution with intention.

    Will it lead to intellectual flourishing or dependency? To a renaissance of human creativity or its gradual obsolescence?

    The answer, for now, is up in the air.


    theconversation.com/is-ai-sparking-a-cognitive-revolution-that-will-lead-to-mediocrity-and-conformity-256940

  • I just checked and some large language models have passed the Turing test.

    Type "can an ai pass the turing test" into Google.

  • I am hopeful that there will always be opportunities for people to engage in ideas and expand thinking as that is necessary to give humans meaning. I can’t envisage AI ever being able to think as humans think, but it is concerning that AI could lead to a world where we can’t tell truth from lies. 

    The government has tried to save money by cutting funding to universities, and humanities departments have borne the brunt of the cuts.  I expect this is because they believe STEM subjects are more worthwhile than the arts, and AI would fall within the STEM bracket.

    On the positive side, politicians and leading people in fields such as philosophy, classics, history, drama, art and science, have been campaigning against the cuts and for wider accessibility to the arts and culture.

    There is a website called “The Conversation”, which claims its articles have academic rigour with journalistic flair. Just keep in mind that the articles aren’t peer reviewed, but they are widely considered balanced and accurate. I find it interesting in this world of one-sided and extreme media. It has articles submitted by academics in various fields around the world, and in the past it has had some intelligent voices on AI, history, politics and other topics. If you think you might enjoy some of the pieces, you can search for “The Conversation” or click the link below. Unfortunately, it isn’t a place where we can engage the way we do on this website.

     https://theconversation.com/uk

  • I think you are thinking in human terms, i.e. you can only read a small number of books, or know a small subset of information. But the AIs are being trained on millions of books and are reading the entire internet in certain areas. It is a huge difference in scale. They are likely less biased, or at least more aware of the bias, than most people.

    There are differences on the scale of some of the AIs too. There are small ones and big ones. The big ones have much larger models based on far more data.

    But their main benefit is in technical areas: looking for which portion of a gene affects certain diseases, sorting through data rapidly to find new drugs, summarising topics or books, giving overviews, and so on.

    Their understanding of the meaning of words or concepts is derived from a probabilistic model based on the contexts in which those words are most used. These are the large language models. They do know about different languages.

    There are concepts and ways of thinking that are better aligned to certain languages. What happens when they swap between them I am not sure, but it is probably no different from the people who do it, as that is what they learnt from.
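
    The "probabilistic model based on context" mentioned above can be sketched in miniature. This is only a toy bigram model on a made-up corpus, an assumption for illustration; real LLMs use neural networks trained on vastly more text and context, but the principle of predicting the likeliest next word from observed patterns is the same.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale text real models train on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word: a bigram model, the simplest
# possible "probabilistic model based on context".
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the most frequently observed next word after `word`."""
    return following[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" most often in this corpus
```

    The model "knows" nothing about cats or mats; it only reflects the statistics of its training text, which is the commenters' point about outputs being rearranged patterns rather than thought.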

  • The biases come from what the AI is trained on and the context of that training. If, for example, it's trained on educational outcomes, then it would see South East Asia as being ahead; this has already been seen, and it has been shown to downgrade other groups as less intelligent. Exam results don't tell the entire story, as can be seen from results within countries. In the UK, some schools with a high number of pupils whose first language isn't English show lower overall grades, but when you look at the starting point of many of those pupils, they show amazing resilience and the ability to catch up quickly. Raw data doesn't give the whole story, not by a long shot. Would the AI know to ask these questions and seek the answers, and where would it seek the answers from? Does consensus reality built from raw data override other considerations?

    AI not being able to experience emotional, ethical or spiritual ideas means it's missing most of what humans mean by intelligence. These are all very important to understanding, as opposed to merely learning. How many of us went through school reciting things by rote? We remembered them, but did we actually understand them?

    Think of how many children have thought that God's name is Harold, and it all comes from the Lord's Prayer: 'Our Father who art in heaven, hallowed be thy name'. Outside of this, how often does one hear or use the word "hallowed"? It's no real wonder that so many children have thought the name of God is Harold, as they're trying to make sense of something with archaic language and an unfamiliar sentence structure. These children have learned the Lord's Prayer, but they don't understand it; it's been learned by rote, and they just repeat, parrot style, what they think they've been taught.

    Likewise, if an AI model has been taught using American English, would it downgrade British English spellings? Mom, or Mum, let alone Mam, which is used in large parts of the UK. We already see this in print media: English people getting irritated by American spellings and, increasingly, vice versa. It might seem insignificant, but if an AI trained in American English were asked to grade spelling tests, would it mark British spellings as incorrect? The words we use add so much layering and texture to what we want to say, going way beyond what is found in a dictionary.

  • But people have all the same biases. The AIs learn from the same sources people do.

    There is no reason it can't make the same, and most likely better, connections, as it can draw from a bigger pool of knowledge. Although this is more applicable to technical topics.

    The challenge is when they learn from each other.

    I think they probably have issues with spiritual, ethical or emotional topics, as they are unlikely to be able to 'feel' anything. This means they are all potential psychopaths.

    Any empathy would be insincere.

  • I think I've watched too much Dr Who to feel comfortable with AI robots doing surgery on me; I think back to the Cybermen from the David Tennant era.

    AI is only as clever as the material it's given to learn from; there have already been instances of bias, racial bias in particular, and there's probably a gender bias too.

    Where does AI leave those of us who want to engage with ideas and expand our thinking? That's something not based on facts alone but on the interpretation of them; context is so important, and I'm not sure AI can know the contexts of exploratory thinking. Is it only "thinking" inside its box, or can it make meaningful connections across subjects?

  • What's the difference?

    If you put it in a locked room so you can't see it, ask it a question and get the same answer, does it matter?

    Both draw on experience to do some best fit pattern matching based on data, unless you want to consider emotions and instinct, which could be useful or introduce bias.

    By actual intelligence I assume you mean something living.

    I think the answer depends on context.

    In terms of which gives you the best conversations, the best technical solutions, the best emotional bond, the quickest solutions, or the answer with the best odds of success, it is going to vary.

  • I would choose both. AI couldn’t have happened without human intelligence. 

    The long list of benefits of AI has been well publicised now, and one of these benefits is that AI is helping to save lives in hospitals and in the community. 

    Like anything else, it’s about using it correctly to benefit the world.

  • The latter. The former seems just a bunch of FAQs and similar addlepated excuses for not actually engaging with people and making their lives better. But then I would say that, as I don't get on with tech, and even apps seem to have to have their own apps these days, which to me is ridiculous.