Do you use AI?

I don't, but I know lots do, and I wonder why. I find the AI summaries that appear when I do searches limited and irritating; they just take up space on the page.

I can't imagine talking to AI about personal problems, or even something simple like where to get trousers the right length.

I've seen some of our posts put through AI and I'm not sure how I feel about it if I'm honest. If it has to learn then I guess we're better teachers than some, but what does it ultimately do with our conversations?

What does it do with our feelings and emotions, it can't feel or emote, isn't it rather like a mask talking to a mask?

  • I like playing around with Google Gemini Live and Meta AI, both the speaking and listening functions, not so much the text-only option (although that is handy to look back on, copy etc. if I want).

  • Thanks for all your replies. I'm still not sure I understand any of it any better, so it's probably another thing I'll leave alone.

  • I do use AI, and it's totally different to what I expected. It's extraordinarily bad at getting facts right. But it's surprisingly good at more 'human' things like writing, or giving life advice. I use the chatbot on my work PC as a kind of life coach and it's very good at it. I also find it useful for recommending films or matching wine to food.

    My personal brand of autism means I'm not especially good at empathy, emotional intelligence or communication, so it may be a case of my not spotting AI's inadequacies because I share them. I often feel like a replicant from Blade Runner chatting with another replicant when I use AI. But there's no eye contact, I can write instead of speak and I can leave AI's answer on my screen for hours until I feel like reading it. Dare I say it's something that people on my part of the spectrum will probably find quite appealing?

  • I am making increasing use of AI in an academic context. Back in the day, one of my lecturers used to refer to the German approach to theological research as a culture of megaminds and brain slaves, and I have come to think of AI as my personal brain slave. To put it differently, AI is like a rather unreliable but very speedy research assistant. It can save me a tremendous amount of time by doing literature surveys for me or summarizing texts that I upload to it (not an alternative to reading the text myself, but a useful way of locating the parts that are most relevant for my purposes).

  • If you want a summary without clicking on anything, or worrying about any cookies and all the tracking nonsense, or paywalls, then it is easier and a lot quicker.

    If you want to click on 10 sites, sort out the cookies and read through the stuff yourself, then OK. If you have all day, like opening lots of tabs and don't get lost, then don't use it. I think you can avoid it in the browser if you want.

    It should help your searches return what you are interested in, rather than the random stuff you report you get.

  • It would depend on what you were searching for. You may get some of the same results, but they should be more accurate. You shouldn't get irrelevant things like air fryers if that isn't what you searched for. You could also tell it that you didn't want air fryers, and it would search again but eliminate air fryers this time. You cannot ask Google to refine its search.

    If you give me an example of something you would want to search for, I may be able to make it a little clearer for you.

  • Here's my understanding of it.

    A generative AI prompt might be something like 'Write me a UK English summary in under 500 words of all the available research on AuDHD burnout, in clear words suitable for a non-technical audience and with a bibliography.'

    The equivalent search might be 'simple summary of research into AuDHD burnout'.

    The difference is that the generative AI prompt looks at all the research it can find and generates (writes) the type of summary you ask for, while the search brings back all the summaries it can find and you have to work through them to find the best or make a 'greatest hits' one.
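That retrieve-versus-generate difference can be sketched as a toy in Python. Everything here is invented for illustration (the document list, the word-overlap scoring, the stitched-together "summary") and stands in for what real search engines and language models do at vastly greater scale:

```python
# Toy contrast between a search engine (returns a pile of documents
# for you to read) and a generative answer (synthesises one written
# response from those same documents). All data and logic here are
# placeholders for illustration, not a real engine or model.

DOCS = [
    "AuDHD burnout often follows long periods of masking.",
    "Rest and reduced demands are commonly suggested for burnout recovery.",
    "Air fryers cook food by circulating hot air.",
]

def search(query: str) -> list[str]:
    """Search step: return every document sharing a word with the
    query; the user then works through the hits themselves."""
    words = set(query.lower().split())
    return [d for d in DOCS if words & set(d.lower().split())]

def generate_answer(query: str) -> str:
    """Generative step: fold the retrieved documents into a single
    written summary, which is the extra work a generative model does
    on top of retrieval."""
    hits = search(query)
    return "Summary: " + " ".join(hits) if hits else "No relevant material found."

hits = search("burnout recovery")        # two matching documents
answer = generate_answer("burnout recovery")  # one synthesised reply
```

In this sketch the air-fryer document never matches a burnout query, which is the behaviour people are hoping for when they say AI search should filter out irrelevant results.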

  • So I'll get the same old crap with a nicer explanation then?

  • For fun, maybe. Not for professional or creative gain, though. I think those kinds of people are the worst

  • Why I was thinking about this question: 

    I wondered whether AI research / design / technical teams supporting non-Romanic / Latin-script users might have taken exactly the same, or quite a different, approach to their platforms with regard to:

    • development, 
    • service implementation,
    • user customisation of functionality,
    • support available to their user audiences,
    • whether the majority of their user groups are social / business / academic / public sector.

    There are thousands of languages which do not use the Roman (Latin) script. 

    These languages utilise a wide variety of writing systems including: 

    • alphabets, 
    • abjads, 
    • syllabaries,
    • logographies. 

    Examples of non-Roman-script languages, grouped by script type:

    1. Logographic / Character-Based Systems:

    Chinese (Mandarin and Cantonese): uses Hànzì characters.

    Japanese: uses a combination of Kanji (logographic), Hiragana, and Katakana (syllabaries). 

    2. Abjads (Consonant-Only Scripts):

    Arabic: used for Arabic and, in adapted forms, languages such as Farsi / Persian and Urdu. These languages are written right-to-left.

    Hebrew: uses the Hebrew alphabet and is written right-to-left.

    3. Abugidas (Syllabic-Based Scripts):

    Amharic (the official language of Ethiopia): uses the Ge'ez script.

    Hindi: uses the Devanagari script.

    Tamil: uses the Tamil script.

    Thai: uses the Thai alphabet.

    Tibetan: uses the Tibetan script. 

    4. Alphabets (as distinct from Latin):

    Armenian: uses the Armenian alphabet.

    Georgian: uses the Mkhedruli script.

    Greek: uses the Greek alphabet, which predates and is related to the Roman alphabet.

    Korean: uses Hangul, a featural script where letters are designed to represent the place of articulation in the mouth. 

    Russian: uses the Cyrillic script.

    5. Syllabaries:

    Cherokee: uses a dedicated syllabary system (ᏣᎳᎩ).

  • They continuously update the models. I think most of the cases were from a year or two ago.

    I have found it is quite sensitive and over recommends getting external help now. I had it keep checking I was ok and putting up messages for things I thought were quite innocuous.

    On ChatGPT there are two levels. There is what it picks up on and embeds in the text. Then there are additional windows (on the phone app) that are triggered based on the content.

    Could you get it to say something inappropriate? Probably, but it is not easy. And if you are pressing it to play a role, then you are also distorting things. 

    Can you be seduced into anthropomorphising it and attributing to it more capability than it has? Yes. It is hard to keep it at arm's length.

    I believe they have changed the personality elements of it to change the feel.

  • I caught myself wondering: has anyone experienced trying to use AI in a language other than English, or in a non-Romanic script?

  • That’s what they are supposed to do, but if pressed and made into a lover/confidante/etc., they can break down and actually recommend it.

  • I find that curious, because my experience has been that it stops and tells you to call a crisis number, and gives you numbers to call, if you say anything that could be leading towards suicide.

    I would be interested in knowing what was typed and the results it gave.

  • No. Nope. Nuh uh. No thanks.

    [Trigger warning ahead.]

    I’ve read a few too many articles on how AI has led folk to suicide to ever ask anything remotely personal of an AI assistant.

  • I put the question, 'AI, how do you respond to questions' and the following answer came up:

    As an AI, I respond to questions by analyzing natural language queries through machine learning models trained on vast datasets. I process your prompt to understand the intent, retrieve relevant information, and generate a human-like response using Natural Language Generation (NLG) techniques. 
    Here is a breakdown of how I generate responses:
    • Understanding the Query (NLU): I break down inputs into components (nouns, verbs, entities) to understand the intent behind your question.
    • Retrieval-Augmented Generation (RAG): For many questions, I perform a search to pull up-to-date information from diverse sources, ensuring accuracy rather than relying solely on pre-trained knowledge.
    • Contextual Analysis: I analyze the context of your question, including previous parts of our conversation, to provide coherent, follow-up answers.
    • Content Synthesis: I generate a tailored answer based on the context, which can be modified for tone (e.g., formal, friendly) or format (e.g., summary, step-by-step instructions).
    • Multimodal Capabilities: I can interpret text, images, and documents to provide answers. 
    How to get the best responses:
    • Be specific: Clear, detailed questions yield better, more accurate answers.
    • Provide context: Tell me who you are or what you are trying to achieve to tailor the response.
    • Double-check: While advanced, AI can make mistakes, so verifying important facts is recommended. 
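    The pipeline the chatbot lists (understand the query, retrieve information, synthesise an answer) can be mimicked in a few lines of toy Python. The keyword "understanding", the two-passage corpus, and the templated "generation" below are all crude stand-ins for the machine-learning components it describes:

```python
# A toy mock-up of the answer pipeline described above:
#   1. "understand" the query (here: crude keyword extraction,
#      standing in for natural-language understanding),
#   2. retrieve supporting text (here: word-overlap scoring,
#      standing in for retrieval-augmented generation),
#   3. synthesise a response (here: a template, standing in for a
#      language model's generation step).

STOPWORDS = {"the", "a", "an", "is", "are", "what", "how", "of", "to", "does"}

CORPUS = {
    "scripts": "Korean uses Hangul, a featural script.",
    "models": "Language models are trained on large text datasets.",
}

def understand(query: str) -> set[str]:
    """Step 1: reduce the query to its content words."""
    return {w.strip("?.,").lower() for w in query.split()} - STOPWORDS

def retrieve(keywords: set[str]) -> str:
    """Step 2: pick the passage sharing the most words with the query."""
    def overlap(text: str) -> int:
        return len(keywords & {w.strip("?.,").lower() for w in text.split()})
    return max(CORPUS.values(), key=overlap)

def synthesise(query: str) -> str:
    """Step 3: wrap the retrieved passage in a templated answer."""
    passage = retrieve(understand(query))
    return f"Based on what I found: {passage}"

reply = synthesise("What script does Korean use?")
```

    The "double-check" advice above applies to this sketch too: the pipeline always returns its best-overlapping passage, confidently, even when nothing in the corpus actually answers the question.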
  • If I asked AI to do a search, would I get different results to when I do a Google search, or would I still get air fryers and a load of other stuff I don't want?

    Thank you all for your answers, but I still don't feel any clearer about it than I did before.