Is AI Good or Bad?

Is AI a good idea or a bad idea? 

  • I agree — AI isn’t inherently good or bad.

    It is also not truly intelligent. It doesn’t understand or reason; it predicts and generates based on patterns in vast datasets. Still, it’s highly sophisticated and evolving fast, with wide applications — from medical research and co-pilot coding to music, image, and video generation, and even decoding animal communication like whale codas (CETI/language galaxy).

    In healthcare, AI is already proving valuable. It helps with early detection, supports overstretched systems, and assists specialists in screening tasks with impressive speed and accuracy.

    There are real risks too. These include high energy use, job displacement, copyright infringement, fraud, impersonation, propaganda, and the manipulation of democratic processes. AI can also produce false but confident outputs — often called hallucinations — and many models lack transparency. Bias in training data can lead to skewed or unfair results, especially for underrepresented groups.

    AI puts very powerful tools in the hands of millions around the world, which is a huge benefit. But it also floods the internet with low-quality, repetitive content — websites, music, images, and articles — devaluing original human work. This especially affects writers and creatives.

    I’ve worked extensively with language models, image generators, and music tools. It’s easy to get generic output, but creating something original or meaningful takes time, effort, and skill, especially in prompt craft. 

    A growing concern is that AI-generated content is now being used to train future models. This creates feedback loops that reduce quality over time. It’s a form of data contamination that can weaken accuracy, diversity, and creativity. Like technical debt in code, it introduces a kind of data debt — where short-term gains lead to long-term fragility. As this builds up, models may become harder to trust, improve, or audit.
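    The feedback loop above can be sketched as a toy simulation. A stand-in “model” re-learns the previous generation’s output distribution while slightly over-weighting its most common outputs; the sharpen exponent, the eight-entry starting distribution, and entropy-as-diversity are all my illustrative assumptions, not measurements of any real system. Diversity, measured as Shannon entropy, shrinks with every generation of model-on-model training.

```python
import math

def entropy(p):
    """Shannon entropy in bits — a rough proxy for output diversity."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def next_generation(p, sharpen=1.2):
    """Toy 'model': learns the previous generation's distribution but
    over-weights common outputs (sharpen > 1), a hedged stand-in for
    tail loss when training on synthetic data."""
    w = [q ** sharpen for q in p]
    total = sum(w)
    return [q / total for q in w]

# Generation 0: a fairly diverse 'vocabulary' of eight output styles.
p = [0.25, 0.2, 0.15, 0.12, 0.1, 0.08, 0.06, 0.04]

diversity = []
for gen in range(8):
    diversity.append(round(entropy(p), 3))
    p = next_generation(p)  # each model trains on the last one's output

print(diversity)  # entropy (diversity) falls every generation
```

    In this toy setting the collapse is built in by the sharpening step, but it mirrors the mechanism described above: each round of training on the previous round’s output narrows the range of what the next model can produce.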

    One of the most exciting uses for me is autism support. I’ve trained a text-to-speech model on my own voice, which helps me communicate more naturally — and it can even sing. I’m also interested in developing an ASD filter. With enough care and input, ASD-focused models could lead to real breakthroughs in understanding and accessibility.

    These are my words and ideas, cleaned up by ChatGPT — which you can usually spot by its fondness for long-form em dashes. 🙂

