Tuesday, December 27, 2022

GPT Tools and Thoughts on Education

In my prior post it was evident that I was quite impressed by the LaMDA chatbot. Since then I have encountered a number of other surprises:

First, some of the Google engineers behind LaMDA started Character.ai and made it possible for anyone to experiment with building their own bot to see what happens. I did, and you can check the last iteration of my bot "Front" here (it is worth your while to create a login). [Sidebar: my first bot implementation was as Yoda of my novel "The Yoda Machine", but the network knew nothing of my novel and a lot about the unrelated Star Wars character, so it was untrainable for my purposes.]

Shortly after that I encountered ChatGPT, the bot made by OpenAI and opened to the public for experimentation, which you can try here (it is worth your while to create a login). I have tried it a fair bit. Smart move on their part, as they get the whole world to train their network.

Last but not least, I stumbled upon a piece in the NY Post about professor Darren Hick, who encountered his first provable case of plagiarism-by-bot and documented how he uncovered it. He detailed it further on his Facebook page.

My conclusion from all this is that GPT, for now, is nothing more than a language manipulator limited by the factual information it holds. It writes incredibly nicely structured gibberish from what it knows in order to satisfy the request it receives. That is how Prof. Hick uncovered his guilty student: ChatGPT was writing BS that could be detected by someone knowledgeable in the subject matter and not distracted by the form. An ignorant reader would be in the opposite position. A lazy student would not check what the bot wrote.

The problem professor Hick correctly identified is that, as time goes on, ChatGPT will acquire the necessary information (about Hume, in his case) as long as people chat with the bot about it, so that eventually it will be able to write something credible even to him.

This brings me to a subject I have touched on repeatedly in this blog over many years: the use of mind maps as a means of communication, summarization, and organization. In the example above, professor Hick asked students to "write a 500-word essay on the 18th-century philosopher David Hume and the paradox of horror", and ChatGPT spat out 500 words of beautifully worded gibberish.

What if the request had been to produce a mind map that summarizes the concepts and the supporting and dissenting arguments for Hume's Paradox of Horror, with each node containing no more than 20 words? I tried my hand at a very abbreviated and unresearched version of it below, inadequate for a student; then again, I am not a student trying to pass his course (72 years old, semi-retired software developer, former professor, and still a researcher).

In my opinion, these outcomes would follow such a request:

As of today, no tool like ChatGPT could produce something like this.

Students would have the incentive to learn to summarize their thoughts instead of embellishing words for the sake of increasing the word count.

Structuring thoughts, as in an outline, is the key to effective verbal and written communication.

Anyone could use the best of those maps as a cheat sheet for quick learning about Hume.

Hume's Paradox of Horror (bogus) Map 
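A sidebar for the technically inclined: one nice property of the 20-word-per-node request is that it can be checked mechanically, unlike "write a good essay". Below is a minimal Python sketch of such a check; the map structure and node texts are placeholders of my own, not a researched summary of Hume.

```python
# Minimal sketch: represent a mind map as nested dicts and flag any node
# whose text exceeds the 20-word limit suggested above.
# Node texts are illustrative placeholders, not researched claims.

def check_nodes(node, limit=20):
    """Return a list of node texts that exceed the word limit."""
    too_long = []
    if len(node["text"].split()) > limit:
        too_long.append(node["text"])
    for child in node.get("children", []):
        too_long.extend(check_nodes(child, limit))
    return too_long

hume_map = {
    "text": "Paradox of Horror: why do we enjoy art that arouses painful emotions?",
    "children": [
        {"text": "Supporting: artistic expression converts negative emotions into pleasure",
         "children": []},
        {"text": "Dissenting: the conversion account never explains how pain becomes pleasure",
         "children": []},
    ],
}

print(check_nodes(hume_map))  # [] means every node fits the limit
```

A grader (or the bot itself) could run such a check before the content is even read, which is exactly the kind of tool use discussed next.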

Another thought regarding tools. 

In the 1970s, before PCs, VisiCalc, and later Excel were invented or became popular, the use of any such tool for an assignment would have been considered cheating. Eventually, people caught up with the technology (I taught using Excel and the fundamentals of database systems at Seattle University in 1983), and today we take it for granted that much math work not only can but should be done with spreadsheets. So it will be with text generators, and it will become necessary to figure out how to grade the intelligent use of tools such as ChatGPT. Clearly, in this case, the student not only did not know the subject matter but did not bother to find out what garbage ChatGPT had created, so she failed on multiple fronts due to laziness; still, kudos for trying to use a new tool.

As time goes on, perhaps the rule will become the ability to give a bot instructions so that it generates the MINIMUM amount of text needed to COMPREHENSIVELY explain the matter. As with word processing versus handwriting: why write by hand if you don't need to, so long as you can think effectively? We are not there yet, but it will come.

Full disclosure: much of this post was written with dictation and automatic error-correction tools, since I have a familial tremor of the hands that makes typing a painful chore.

Tuesday, June 14, 2022

I met the future I had imagined, and it blew my mind

In a recent post I wrote about my novel The Yoda Machine. It was e-published (Amazon, Google, Kobo, etc.) in 2018 and published in paperback (Amazon) in 2021. I wrote it between 2007 and 2014, when these ideas were still well into the future. It is essentially a conversation between Yoda, an "AI Socratic teacher" (a bot, in today's terminology), and a young child in the world of 2064.

Today I discovered I was miserably mistaken about the timing. That bot is here now, and I met it in a post titled Is LaMDA Sentient? — an Interview, by Blake Lemoine, an AI engineer and ethicist at Google who appears to have gotten on the wrong side of his employer (read here).

The conversation of Lemoine and his associate with LaMDA, the sentient bot, is so similar to the dialogues between Yoda (my bot) and Darlene, the young woman in my novel, that I have been having goose bumps for the last hour.