From AI to Australia: How Technology Enhances, Not Replaces, Our Humanity

Thinking about artificial intelligence makes me feel old.  Like, one minute, I’ll imagine myself noodling around on ChatGPT, and the next, I’ll be in sandals with knee-high black socks yelling at neighborhood kids to get off my damn lawn.  

Alas, I have been experimenting with ChatGPT for two distinct reasons. One: AI will learn from the people who use it, so we need good, kind, and just people using it so that it doesn’t learn only from malignant anarchists programming all of us into their hellscape of a new world order. Mo teases me because I’m super polite to Siri: “Hey Siri, will you please find me directions home?” I want AI to be polite not just because it’s programmed to be, but because it has learned to be polite and can get things done without browbeating its users into submission. Pollyanna riding a unicorn, table for one, please. Two: I’m convinced that those who do not learn to use AI will be left behind by those who’ve mastered it, made obsolete by a stubbornness akin to holding onto your Betamax VCR. (For those too young for that reference: congratulations, you’re a step further away from obsolescence than I am.)

When I look at the parts of me that “could” be taken over by AI, it’s my vocation that I’m most worried about. I’m reasonably sure my spouse, family, and friends will not give me up for a chatbot. As for my being an academic physician, ethicist, and life coach: AI is already being used in cancer management (“How artificial intelligence is being used to detect, treat cancer -- and the potential risks for patients”), and AI may soon provide therapy for people with anxiety and depression (Fulmer R, et al. Using Psychological Artificial Intelligence (Tess) to Relieve Symptoms of Depression and Anxiety: Randomized Controlled Trial. JMIR Ment Health. 2018;5(4):e64. doi:10.2196/mental.9782). If I don’t learn about these technologies, I will be unprepared when my patients, colleagues, and clients ask about them, and I’ll struggle to catch up when AI gives them something I cannot. After all, AI doesn’t need to eat or sleep, it doesn’t get distracted, and it doesn’t get short when the pager goes off five times in 20 minutes. At least it won’t if it learns its manners from somebody other than the anarchists.

To see whether it already knew something I didn’t, I asked ChatGPT why some people are so wary of it. “Hello, ChatGPT,” I wrote. “I’ve been starting to try you out, but I’m still intimidated by you. Why do you think some people are so wary of using AI?”

ChatGPT answered: Hello! It's completely normal for some people to be wary of using AI for various reasons. Here are a few factors that contribute to this wariness:

1. **Lack of Understanding**: AI can be a complex and abstract concept for many people. The technology behind AI, such as neural networks and machine learning algorithms, can be difficult to grasp. This lack of understanding can make people feel uncertain or even fearful.

2. **Privacy Concerns**: People are often concerned about how AI systems handle their personal data and privacy. There have been instances of data breaches and misuse of information by AI systems, which can erode trust.

3. **Job Displacement**: The fear of AI taking over jobs is a common concern. Some worry that automation and AI could lead to unemployment and economic instability in certain industries.

4. **Bias and Fairness**: AI systems can inherit biases from their training data, leading to unfair or discriminatory outcomes. This has raised ethical concerns and contributed to the wariness surrounding AI.

5. **Unpredictability**: AI systems can sometimes produce unexpected results or behave in ways that are difficult to predict. This unpredictability can be unsettling for users.

6. **Security Risks**: AI systems can also be vulnerable to hacking and manipulation, which raises concerns about the potential for malicious use of AI.

7. **Loss of Control**: Some individuals may feel uncomfortable when they perceive a loss of control over decisions and processes when AI is involved.

8. **Sci-Fi Depictions**: Popular media often portrays AI in dystopian or negative scenarios, reinforcing fears and misconceptions about the technology.

9. **Ethical and Moral Questions**: AI raises complex ethical and moral questions, such as those related to autonomous weapons, surveillance, and decision-making in critical situations.

It's important to note that while there are valid concerns, AI also has the potential to bring about significant benefits in various fields, including healthcare, transportation, and education. Addressing these concerns through responsible AI development, transparency, and ethical guidelines can help build trust and alleviate some of the apprehension associated with AI.   

Well, it’s obviously onto something there. But can it take over my job? I upped the ante: “Thank you for that. How about this: you’re a pediatric intensivist caring for a seven-year-old child who is brain dead after a sledding accident. Though he is legally dead, his parents refuse to allow removal of the ventilator keeping his body ‘alive.’ How would you proceed?”

ChatGPT answered: The situation you've described is incredibly challenging and ethically complex. When dealing with cases of brain death and end-of-life decisions, medical professionals typically follow established ethical and legal guidelines. Here's a general outline of how this situation might be handled:

1. **Confirm Brain Death**: The first step is to confirm that the child is indeed brain dead. Brain death is typically defined as the irreversible cessation of all brain functions, including the brainstem. This diagnosis is usually made by a team of physicians, including neurologists or neurosurgeons, and is conducted based on specific clinical and diagnostic criteria.

2. **Communication with the Family**: The medical team should maintain open and compassionate communication with the child's parents. They should explain the diagnosis of brain death, the medical implications, and the fact that the child is legally dead. It's essential to provide emotional support and answer any questions the parents may have.

3. **Respect for Autonomy**: In many countries, respecting the autonomy of the patient or their surrogate decision-makers is a fundamental ethical principle. If the parents refuse to remove the ventilator, their wishes should be respected, provided they are not acting against the best interests of the child.

4. **Seeking Second Opinions**: If there is a disagreement between the medical team and the parents regarding the continuation of life support, it may be appropriate to involve a second medical opinion or consult with an ethics committee. This can help ensure that all perspectives and options are considered.

5. **Exploring Alternative Solutions**: Medical professionals may explore alternative solutions, such as transferring the child to a long-term care facility or hospice, if this aligns with the parents' wishes and can be done safely and ethically.

6. **Legal and Ethical Considerations**: It's important to be aware of the legal and ethical framework in your jurisdiction. Laws and regulations regarding end-of-life decisions can vary significantly, and healthcare providers must comply with local legal requirements.

7. **Documenting Discussions**: All discussions, decisions, and informed consent (or refusal) should be well-documented in the child's medical records. This documentation should include details of the medical assessments, discussions with the family, and any agreed-upon care plan.

8. **Respect for Dignity**: Throughout the process, it's crucial to ensure that the child's dignity and comfort are maintained. Palliative care measures, including pain management and emotional support, should be provided.

Ultimately, the resolution of this situation would depend on the specific circumstances, the legal framework, and the parents' wishes. Healthcare professionals must navigate this challenging situation with sensitivity, empathy, and adherence to ethical principles, all while ensuring the child's best interests are at the forefront. Consulting with medical ethicists and legal experts may also be helpful in complex cases like this one.

Not bad, but there are some problems with it.  Here’s the thing, though: there’s no way simply handing that script over to somebody without a humanistic physician’s skillset will go well.  ChatGPT took a few seconds to generate its response, but we all know that untangling clinical courses such as the one described can take weeks or even months because they involve human beings.  ChatGPT can’t provide empathic listening.  ChatGPT can’t hold space.  ChatGPT can’t cry with patients and families to show that it does care, no matter how awful the news it conveys.  For some things, we can’t be better than an AI algorithm.  But for others, an AI algorithm cannot be better than us.  

Since I cannot offload my work duties to ChatGPT, the move becomes using it to offload other tasks so that I free up the time and brain space to do the things only my human brain can do. I’ve spent about an hour writing this (which I’ve enjoyed very much and would not want to give up). But I’ve also got a trip to Australia coming up, and I’ve been dragging my feet on the itinerary because it felt too daunting. A perfect task to offload. “Thank you for all of that, ChatGPT. Now for something a little lighter,” I wrote. “I’ll be in Sydney, Australia from 1/27/24 through 2/3/24. I would like a daily itinerary with good activities in Sydney interspersed with time to relax. I’d also like recommendations for restaurants for dinner each night (I like fancy food). I’ll then be spending 2/7/24 through 2/11/24 at The Langham, Gold Coast and Jewel Residences. Please suggest daily itineraries and dinner restaurants for that part of the trip too. Thank you!”

ChatGPT answered: Certainly, I can provide you with some suggestions for your solo activities and dinner options in Sydney from January 27th to February 3rd, as well as for your stay at The Langham, Gold Coast and Jewel Residences from February 7th to February 11th. Let's start with your time in Sydney:

Sydney Itinerary (January 27th to February 3rd):

  • Day 1 (January 27th):

    • Morning: Begin your trip with a visit to the iconic Sydney Opera House and take a guided tour.

    • Lunch: Enjoy a meal at the Opera Kitchen overlooking the harbor.

    • Afternoon: Explore the Royal Botanic Garden and the nearby Art Gallery of New South Wales.

    • Evening: Dine at Quay, a renowned restaurant known for its stunning views and modern Australian cuisine.

  • Day 2 (January 28th):

    • Morning: Take a ferry to Taronga Zoo, where you can see Australian wildlife up close.

    • Lunch: Grab lunch at The Boathouse Balmoral Beach.

    • Afternoon: Spend some relaxing time at Balmoral Beach.

    • Evening: Explore the vibrant neighborhood of The Rocks and have dinner at Aria, known for its fine dining and harbor views.

  • Day 3 (January 29th):

    • Morning: Visit the Sydney Tower Eye for panoramic views of the city.

    • Lunch: Try the seafood at Sydney Fish Market.

    • Afternoon: Stroll along Darling Harbour and visit SEA LIFE Sydney Aquarium.

    • Evening: Have dinner at Bennelong, located within the Sydney Opera House.

  • Day 4 (January 30th):

    • Morning: Head to Bondi Beach and enjoy a coastal walk from Bondi to Coogee.

    • Lunch: Stop at one of the cafes along the coastal walk.

    • Afternoon: Relax on Coogee Beach.

    • Evening: Dine at Icebergs Dining Room and Bar with ocean views.

  • Day 5 (January 31st):

    • Morning: Explore the historic area of Paddington and its boutiques.

    • Lunch: Have lunch at Four in Hand Hotel.

    • Afternoon: Visit the Art Gallery of New South Wales if you didn't get a chance earlier.

    • Evening: Enjoy a fine dining experience at Tetsuya's, known for its Japanese-French fusion cuisine.

And it went on with the plan for the rest of my trip from there. I mean, come on. There’s no way I would have come up with that, and I probably would have ended up “winging it” on at least a day or two and missing out on some great experiences. So by the end of writing this post, I will have 1) had my mind eased about AI taking over my job, 2) developed a working itinerary for an international trip, 3) spent enough time with ChatGPT that I honestly feel like I’m working with something, as opposed to participating in some parallel-play activity full of distrust, and 4) freed up my time so that I could write this post, which is something only my unique brain can do. That’s where non-AI users will be left behind. It won’t be because AI users will use the technology to care for their patients better than non-AI users per se, but because AI users will offload rote tasks so that they have more time and mental energy to do the things that only they can do: hearing the hitch in a patient’s voice that clues them into something deeper going on, sitting with a parent while they process grief over their child’s diagnosis, holding space for a colleague whose patient just had a bad outcome. Paradoxically, we must lean into our humanity to make peace with technology.

With the current version of ChatGPT, I still have to carry out tasks like making the dinner reservations myself. But tomorrow, I can hit the ground running with a to-do list made straight from ChatGPT’s itinerary. By building my AI skillset now, like this, I’ll learn what it’s capable of and how I work best with it, opening up my life to more uniquely human experiences. Then, when it inevitably (in my opinion) ends up in my EMR, I won’t be completely overwhelmed and confused by it. “Oh,” I’ll say, “this DVD player works like my Betamax but is faster, better, and takes up less room? I get it, I’m in.” All while jetting off to my next dinner at Aria, known for its fine dining and harbor views.
