In this second conversation with Georgia educator Daniel Rivera, some outstanding guidance is provided for anyone who wants to use Artificial Intelligence, whether they consider themselves an expert or a novice. Some specific AI tools are mentioned, and Daniel offers some interesting perspectives regarding the ways AI is connected to the energy sector, politics, and economic growth. After sharing with us some of the great things AI can do and will do (Neuralink, Blindsight), Daniel ends his remarks with a universal truth about great teachers and the value of relationships, which AI cannot match.
4:45 – two (maybe three) rules for AI prompts
5:15 – Rule 0 – mindset
5:45 – Rule 1 – be clear and specific
8:05 – don’t be discouraged
8:25 – Rule 2 – have a conversation
10:00 – keep going, don’t settle
10:50 – the Magic School conundrum
14:00 – Khanmigo – one for teachers and one for students
15:15 – Khanmigo will not provide answers – it’s a tutor
16:15 – Microsoft Copilot
16:35 – Coach.microsoft (reading support)
17:45 – Perplexity (powered by Claude and by ChatGPT)
19:15 – to increase the quality of student work, give them an audience
20:35 – students have stories to tell and they just don’t know how
21:00 – music, curiosity, passion, engagement, poetry, content areas
22:00 – ChatGPT is the Coca-Cola of AI
22:30 – there are a lot of AI chatbot options available, and a number are free
23:45 – image, audio, video “categories” of AI
24:30 – exponential vs. additive potential of AI growth
27:05 – machine learning, language comprehension, image recognition
28:00 – Neuralink – a brain interface chip – drive a computer with your mind alone
28:45 – Blindsight – resolution improving and possibly humans with infrared vision
30:30 – the connection between and mutual dependence across: power and the energy sector, AI data and power consumption, national security, and climate concerns
32:25 – data sets (prior knowledge), compute power (processing time or general intelligence + effort), algorithms for training (teaching, formative assessment)
34:40 – how AI entered the most recent presidential election conversation
35:30 – military, environmental, academic, geopolitical, and economic growth concerns are inextricably connected with AI
39:45 – Donald Dowdy, high school band director
40:40 – Bruce Little, Art Education Practicum instructor, Georgia Southern University
42:30 – honor, discipline, respect, the craft of teaching
43:25 – You can’t replace relationships with AI
Coach (Microsoft - reading support)
Background image on cover is by Albert Stoynov, on Unsplash. This image replaces the standard cover art by Simon Berger (details in the footer).
David (00:11):
What matters most in learning: the challenge, the thrill, the benefits, interacting with other people, or something else entirely? What is the connection between leading and learning? Does change drive learning, or does learning drive change? What's more important, teaching or learning? Is everyone a leader, a learner, a teacher? Want answers? Listen in as we address these intriguing issues through commentary and with guests who share their thinking and tell us their stories. Lead. Learn. Change.
Daniel (00:56):
Think big. Think you're talking to a genie. You can ask for anything. That's rule zero. That's your mindset. Rule one is be clear and specific about what you want. The more you kick back and say, "refine" or "what about this" or anything like that, you get ridiculously better results. And as long as your visual cortex is intact, meaning you could have been born blind, you will be able to see. Those people brought a certain level of rigor and professionalism, each in their own way. And all of those were based on relationships. You cannot replace those with AI. You just can't.
David (01:46):
Today's guest on Lead. Learn. Change. is Daniel Rivera. Daniel, thanks for taking your time to speak with me again.
Daniel (01:53):
Oh, thank you for having me again.
David (01:55):
The first episode with Daniel – and there's a link in the show notes – was filled with valuable information for anybody who wanted to learn about artificial intelligence. You don't want to miss that segment because we discussed AI prompts, AI hallucination and skipping school, among other things, and we had a chance to learn more about AI from Daniel, a Google certified trainer and innovator. So we're going to take advantage of that opportunity. Today's episode includes more ideas about how learning and teaching can be supported with AI. So Daniel, are you ready to get started with round two?
Daniel (02:27):
Yes, I am. Thank you for having me again.
David (02:29):
We ended that earlier episode talking about prompts and how important it is for our input to be very clear and specific, and we even talked about what the perspective is: "place yourself in the position of a student who's new to the area" or "is disinterested" or whatever. This takes a bit of practice, and my first few attempts at using AI were to provide me with some summary information, which is really all I thought AI could do at the beginning. I was like, "Hey, it can summarize some stuff." It left me kind of disappointed, but then I realized I could ask again and again and again, and the practice of crafting my prompt and getting good at that can really pay off. So, just this past week: I had knee surgery about five weeks ago, I have one more PT session to go, I'm trying to get back to the gym, and I obviously have some things to do differently this time around.
(03:20):
And I decided to do the homework that the PT gave me by using AI, and the goal was to come up with some workouts that would cover all the big buckets of exercises: compound exercises, athletic exercises, warmups at the beginning, et cetera, and over time would work everything. I went to AI with all the notes I'd taken from all my PT sessions and everything I knew myself, and it took about four iterations, but I ended up with six workouts. I took those to my PT and just said, "Well, here's what I came up with." She was like, "Wow, man, this is really, really good stuff. I'm skimming through this. I can't really think of anything I would add to this." And then I said, "Well, here's the dialogue I had with ChatGPT-4o to generate it." It was better than anything I could have done in days on my own, and it probably [would have] taken her hours and hours to pull this together, without being able to move things around quickly. So just working on writing the prompts is really, really huge. And it might seem like it's more trouble than it's worth if somebody's just getting started, but the learning can be really quick for that part of it. I say all that to say: what do you say to somebody who wants to experiment a little and not be overwhelmed?
Daniel (04:41):
Right. Well, I would say keep it simple. I've seen a lot of lists of 50 great prompts for middle school science teachers and 10 great prompts for counselors. Those are good to get you ideas. It's like giving you fish, but I think it's better to learn to fish. And so I've been looking at a lot of these prompts and I've got two big rules for prompting. Now these are big, big rules that include a lot of sub-points if you care to use them, but at the very surface level, they're easy to understand. There's really a third rule. Rule zero is "think less menus, more genies." Look back at the last session, where I explained that in great detail, but think big. Think you're talking to a genie. You can ask for anything. Okay, that's rule zero. That's your mindset. Then you have rule one and rule two, and those are these.
(05:35):
Rule one is be clear and specific about what you want. Now, we're used to talking to Google or Siri or something, and we have to be fairly short in what we say, so much so that we use Boolean searches or hashtags and keywords in our searches. But for the first time, we can talk to this thing as if it were a human, and we can explain as much as we want, especially on the paid plan or with the paid features that you can use occasionally or a few times a day on the free plan. Most of these chatbots are now sophisticated enough to accept paragraphs of input as a question. So you can explain tone, length: I want you to act as Abraham Lincoln. I want you to use the narrative voice of Scooby-Doo. I want you to answer in iambic pentameter, whatever it is you want.
(06:29):
And then context. We've had a lot of problems with this. This is the fifth time I've had to talk to this teacher. This student has had some issues with this and this and this. This kid loves this and this and this, whatever. And specificity, all that stuff. Just give it everything you can possibly give it, including (and this is a question I've asked a few times and it's always turned out really well for me) "What else do you need to know from me to do a good job?" That's crazy, right? You're not used to that, but we say it to people. We say, "What else do you need to know?" And sometimes it'll say, "I got you." And other times it'll kick back three or four good questions that, if you answer them, let it do an even better job. So be clear and specific and give it as much information as you can, even if that means asking it if it needs more information.
David (07:18):
This past week when I worked on the workout plan, it stopped halfway through and said, "Is this what you meant? Would you like me to do the three workouts in this same format, or give me the adjustments that you want to make?" And I looked at it and it was spot on, and I was like, okay, go ahead, do the next ones.
David (07:39):
I said, "Yes, do the next three," and then it did it and asked, "Anything else?" I said, "Well, now that I think about it..." Every time I see an answer, I have thought of something. "So break these up for me and give me two workouts where I'm not at the gym and all I have access to are some power blocks and a floor mat and body weight. That's it. Give me two workouts with all the same muscle groups." And it did it.
Daniel (08:05):
Yeah, yeah, exactly right. So you were not asking for enough at the beginning, and that's fine. That's a skill that takes time. We're going to start off with real basic usage. Don't be discouraged, and you're going to get better and better and better at the prompting. That's rule one: be clear and specific. Rule two, and again there are only two rules, is have a conversation. And what I mean by that is we're used to something like Google, where you do a query and it gives you the answer and you don't get to refine the query. You can just do another query if you need to. In fact, most people don't even go to the second page of Google results. This is not Uber Eats. You're not just placing an order that gets delivered and then you're done. Think of this like a game of tennis.
(08:52):
You want to keep that volley going as long as possible. The more you kick back and say, "refine" or "what about this?" or "Hey, I got some pushback with this" or "I disagree" or anything like that, you get ridiculously better results. So you can say, "Okay, that's a great start for a lesson. That first day, 20 minutes of introduction here. Can you give me an example of what that is? Can you give me the basic beats that I need to put in for the presentation? Can you give me some videos for that? Generate a work sample. Give me a rubric to grade that work sample. I don't like that. Let's make it a 25-point rubric. Oh, actually, can you make those points a one, three, and five scale, so I can have a two or a four in between? Oh, I forgot. Include a zero point on the things in case the students don't do it at all.
(09:42):
Oh, hey, give me a student work sample that will pass that with flying colors. Give me a perfect score. Now give me one that's going to fail it, so I've got some exemplars. You know what? It strikes me that they may be able to pass this rubric and not master the standard. Is it possible? Oh, it is. Can you adjust the rubric point settings so that it is impossible for them to pass this without passing the standard?" Sure. So now we're looking at reliability, we're looking at accuracy, and all of that as you keep going. And the more you do this, the more you get into this habit where I'm not worried about upsetting this thing, right? It's not going to get its feelings hurt, and I'm not settling for the first thing it gives me. And that also helps correct whatever lack of specificity I may have given it, which is what you were showing with your workout plan.
(10:29):
You continue to have a conversation with it, which helps refine and refine down to what you really wanted at the beginning but didn't realize you wanted. And so all this is magic, but those are the two things. That's it. So let me make a slight change here, but I need to explain this. This is where we run into what I call the Magic School problem. I like Magic School; let me just say that for the record. I think it's a seven out of 10. I think people think it's an 11 out of 10 because it's magic and it's school and it's friendly and it's colorful and it's so cool. It's got these great tools there. And if you're a novice to AI, yeah, that's awesome. And the iPhone 1 was amazing when it first came out, but now we have an iPhone 15 or 16 . . . I'm not an iPhone user.
(11:26):
So Magic School, especially if you're on the free plan, unfortunately kind of violates all three of my principles. My rule zero of genies versus menus: when you open it up, there are 72 tools. Well, now it's a menu. How likely are you to just have a conversation, an open-ended, freeform, creative conversation with Magic School? Unlikely. They do have Raina, a chatbot, there, but on the free plan that's run by 3.5, so you're not getting the best brain anyway. And when you start using the tools on the free plan, you can't make any adjustments. Well, when you fill out the tool, there is a box for "other" on most of these tools. But again, since you've already spent some time filling out the grade level and the standard and so on, you're already in that "I'm filling out a form" mindset. You're in a menu mindset.
(12:20):
You're not in a genie mindset. So its own interface, its UI, encourages menu thinking, not genie thinking. Second, you're not thinking, "Let me give it a ton of information and be super specific." You've already filled out a couple of fields, and then the rest is like anything else, and you may not think to add in all these other conditions you want. And then the third one: if you're on the free plan, you cannot revise what it gives you. You have to be on the paid plan. So that whole conversation aspect is out the window too. Magic School is great for beginners, and it's great for convenience, and it's great for a specific tool that it does really well, like the YouTube summary one or the YouTube quiz generator. There's some really good stuff on Magic School. Again, I approve of it, it's a seven out of 10, but it does tend to feel a bit more like a Snickers bar and a little less like baked chicken with broccoli, which is what we should be eating. And long-term, we might get hooked on Magic School and never go to ChatGPT, which is where you start to see what real magic you can do when you grow in your ability to prompt and you use this thing like a genie.
David (13:31):
Absolutely. Let's talk about some specific tools then and spend a minute or so on each one. We'll put links in the notes for these tools as well. So Khanmigo, that's K-H-A-N-M-I-G-O for people listening, at khanmigo.ai/teachers. I've heard you say this is fantastic for parents. It's relatively inexpensive, less than a cup of coffee a month. And it's got one really cool feature that I'm sure you'll mention that sort of sets it apart from some of the other AI tools.
Daniel (14:06):
Right? So I should make a note that there's Khanmigo for Teachers, that's the URL you gave, and then Khanmigo for Students. Khanmigo for Teachers is currently free for all teachers and it's powered by GPT-4, the big brain, not the cheapo 3.5 brain. And that's again the difference between a high school and an elementary student in terms of intelligence. So you want that GPT-4 if possible. So right there it's a little bit better than Magic School. However, Magic School does have a lot more tools than Khanmigo does for teachers, but Khanmigo's free, so there's a big one right there. When you are using the free version, you do get that better brain. Now for students, that's $4 a month, and that's a private license, an individual license. And what it does is it lives next to all the video content at Khan Academy, and all that's free; the lessons and all that are free for the kids.
(15:02):
Khanmigo is a little bot that just shows up next to all that stuff. But you can also just access it any time on its own, and it will not tell you the answer. You can ask it to solve the problem or write your paper, and it will just refuse to do that. So it's ChatGPT with specialized training on top of it, and it's kind of a walled garden, and it's a really good way to get kids to use AI and not use it incorrectly. It acts as a tutor, as it should. And the teachers or the parents, depending on whether you're using this as a district or individually as a parent with your kid, can see the chat history of everything the kids have ever talked about with the AI. For districts, the pricing kind of varies, but it can get as high as $35 per student annually, and a little bit lower depending on the deals and whatever else you talk to them about. But even at $35 annually, that's way cheaper than the $48 annually that you'd be paying individually, and you're looking at a thousand students costing you $35K for a school district. A thousand students will have a 24/7 tutor in every subject, powered by the best brain we have on the planet, GPT-4. And it will not tell them the answer.
David (16:11):
That's great. That's crazy. There are others. So, coach.microsoft.com and Copilot.
Daniel (16:18):
So Copilot is similar to ChatGPT; it's powered by ChatGPT. It has some of the same features, and you can use it a few times a day for free. It's a nice alternative to paying for ChatGPT, but if you're paying for ChatGPT, just use ChatGPT. If you're not paying, then Copilot is not a bad option. Now, coach.microsoft.com is Microsoft Reading Coach, and it is a reading tutor for students, primarily targeted toward elementary grades. You create accounts for the students, and they log in and start reading some stories at whatever grade level they select. It can even use AI to generate some of those stories. And then it analyzes their reading, tells them what words they've mispronounced and how much, gives them a percent accuracy score, and then walks them through practicing those words and getting fluent in them so they can do better. So you can have 30 kids all reading, with it analyzing them, and you get a dashboard report of how the kids are doing while it walks them through trying to learn those words. There are other options in there, in terms of the font, the colors, dyslexia mode, stuff like that, that can really make the reading more enjoyable or more possible for the students too. And it's free.
David (17:27):
Perplexity, which I know is more high school and research based.
Daniel (17:32):
Yeah, it's 13 and up, and it's a chatbot similar to ChatGPT. It's powered by Claude and ChatGPT. So you might say, why don't I just use ChatGPT? The difference is it does switch between Claude and ChatGPT, whichever tool is better, and Perplexity lets you specify. It's basically a search engine with AI digesting the results of the webpages it finds, which is crazy good. So you could say, "I need to search about current nuclear reactors in the US and which ones are being built and which ones are way far off. What's our energy viability in the next three years?" You could say "restrict this to YouTube, I only want to see videos," or "restrict this to social media, I want to hear what people are talking about," or "restrict this to academic journals only." You can do that kind of stuff when you do a search, and then it gives you all its sources, it digests them, it gives you a summary. You can then talk to it about that information just like you can with ChatGPT, except every claim it makes, every point it makes, it cites with a link to the source where it got that info. So you're greatly reducing your hallucinations or inaccuracies, and you can choose whether you trust the source or not.
David (18:43):
This stuff feels compelling. It pulls you in and you want to try the next thing, which is exactly what you're trying to get students to do: want to try something else.
Daniel (18:52):
Can you imagine telling the students, "Hey, we're all going to write, we're going to do some poetry and we're going to write songs. And if you do a really good job, we'll turn it into an actual song that you can download and share with your friends or family or girlfriend" or whatever – that produces a product that they will not throw in the trash.
(19:13):
And again, if you want to increase the quality of student work, give them an audience. The best example of that you can see every day is a high school marching band. They have a product that they produce for the public every Friday night, and you get 80 to a hundred kids in a classroom (now we're not talking about a class size of 30) all with expensive noisemakers, and you get a random sampling of every kid in the school, some of the best, some of the worst. And you have to have 70% accuracy by the first week of school. They come to summer school to do this, and they do it outside, and they move and walk at the same time while they're playing all this, and they've got it memorized by the end of the semester. It's all memorized with 90% accuracy, at least. You ever hear a band that misses one out of every 10 notes?
(20:06):
It's pretty rough. So at least 90% accuracy, if not 95 or 98. And you're doing that with a class size of 80 to a hundred. Incredible. How do they accomplish that? There's a lot going on there, but one of the biggest things is that there's a real audience for their work. When I see someone write a song in Suno for their loved one who's in the hospital, or for their wife for their anniversary, or for their son who's getting married, I mean, man, man, there are so many kids out there with stories that they want to tell, and they just don't know how because they're not skilled enough to do so, because they haven't had the right kind of language arts education, or their own home life or society has kept them from learning. So many kids understand music. Music speaks to them, especially in the teen years. And we could be writing songs all the time, and these kids could be super engaged – no, I'm sorry, super curious, super passionate, thus causing them to be super engaged.
David (21:12):
And it's poetry, so therefore it's presentation, and it's public speaking, and it's dialogue, and it's grammar. And then depending on content, it's - fill in the blank on - any other subject area you wish. So it sounds like some of these tools are more sweeping than others, and some are very tailored for either a group or an age or a type of interaction. Just like Khanmigo doesn't provide answers, and Claude AI is more like ChatGPT, and some of these other tools use these large language models as their interface to do what you want them to do.
(21:47):
I think when I heard you a while back, you said ChatGPT is the Coca-Cola of AI.
Daniel (21:54):
That's correct.
David (21:55):
Explain what you mean by that.
Daniel (21:57):
Right. ChatGPT is just kind of the household name. It's well known. I'm from Georgia, so Coca-Cola is king here. No one drinks Pepsi if they can help it. It's irrational; it's just what it is. But the point is everyone uses Coke - or Cokes - as a generic term for soda. So we don't say pop, we don't say soda, we say Coke. So if you say, "I need to cut back on drinking so many Cokes," you could mean Mountain Dew. It doesn't matter; it just means soda. And so that's what's happening. I'm starting to see that with ChatGPT. They're like, "Oh, I just used ChatGPT." It's kind of what everyone knows, but they may mean Perplexity or they may be talking about Claude. There are other things out there, and again and again I'll mention Claude or Perplexity and no one's heard of it, at least not in education.
(22:44):
And they are really good tools, and each chatbot is a little different for a reason. Gemini is not bad now that they've kind of corrected some of the bias stuff. Gemini is pretty rock solid. The free version is about 50 to 75% as good as ChatGPT, but it's free. Use it all day. And they're about to roll it down to 13 years and up and open it for teens specifically in Google for Education, Google Workspace, so you could deploy it to your students if you wanted to. So that's what I mean. The Coca-Cola of AI means everyone knows it, it's synonymous, it's a household word, but there are other soft drinks out there like Pepsi, Mountain Dew, Dr. Pepper and so on, and that's only talking about the soda. So if you equate sodas to chatbots, you can then say, oh, what about other snack foods like crackers and stuff? Well, now we're talking about things like imagery - image generators. Oh, and did you know there's a whole other category of audio AI, and there's another category of video AI, and another category and so on and so forth, in the giant food category that is AI.
David (23:54):
That feeds back to what you mentioned earlier about this exponential, not just additive, view of the improvements that are happening almost daily with AI. It's not just language plus vision plus robotics; it's all the combinations possible, and it's accelerating. So people who don't even know what code means, or don't know any coding language, don't have to learn code at all; they can be told how to if they want to, and they can have code written for them and then dissect it. So what does this mean, this exponential versus additive aspect of AI? What does it mean for teachers and students, and schools in general?
Daniel (24:39):
Well, it means that we have no way of predicting what's coming. Not really, not the average person. If you're nerding out like I am and you're keeping your finger on the pulse, then yeah, you might say, yep, 2025 is going to be all about agents and about physical intelligence. But most people don't know that. Most people are unaware; they don't have the full-time job of keeping up with this. So the big takeaway there is that it would be folly to come up with precise lingo and precise policies around AI, because it's going to change so fast that by the time it's printed, it may have aged like milk. So instead you approach it with general strategies. That's why I don't have a lot of specific prompts. I look more at general concepts with prompting.
David (25:26):
The two rules, right.
Daniel (25:27):
Yeah, those will age well. Thinking about genies instead of menus will age well. So this is the kind of stuff I'm saying, or the importance of curiosity versus apathy – that will age well, as opposed to this tool or that tool or this prompt or this special hack you can do. I'm not really as interested in that. The exponential thing, though, I think is just super important for people to really understand: you can talk to AI now in English or any other language, up to 50 languages, and it's so different from what we had to do yesterday. So with that single addition of large language comprehension – that language comprehension with this machine learning – you just had one additional modality, and now you can do so much. And now AI is in the hands of everybody, no matter if you're a technophile or a technophobe. Your 87-year-old grandmother can use AI and use it well, if she can speak a language.
David (26:31):
It's amazing, the reach. There is an app on my phone called Be My Eyes, and it's for the visually impaired, and there's a giant network of people, and when someone who is blind or visually impaired has an issue, they open the app and request help. Now, however, the visual AI is able to do what the person on the other end could do, and sometimes far more quickly and efficiently.
Daniel (27:03):
Yeah, so piggybacking on that, this is what I mean. You have machine learning, then you add in the component of language understanding and suddenly it does all this stuff. Then you add in a third component of image analysis and recognition. Now you can upload images to it, flowcharts, graphics, text documents, it'll convert it to text. “What is this bug? What am I looking at here? Is this an infection? Is this flowchart good?” All this stuff it can now do. And not only that, but it can now analyze friend and foe - enemy recognition - patterns. It can look at skull fractures and say, oh, you got a thing here. It can detect cancer. All this stuff. Once it starts having the image ability and it can now see and talk and reason, well now it just needs a body. And that's the fourth component of robotics, and now it can move about in physical space.
(28:01):
But let's step back a little bit before we get to the robotics thing, back to giving it that image recognition. Your listeners may or may not be familiar with Neuralink, which is a brain interface chip hardwired into your actual brain that allows you, by thought alone, to drive a computer. You can move the mouse, you can control clicks, you can open screens and do all that with the mind alone. In early testing, they had chimps playing Pong with their minds alone. And now they've done successful implants on humans. They got FDA approval a couple of years back, and now they've successfully implanted it. And there are videos online. You can see this guy, and he's over there playing Mario Kart and chess and driving this computer. Well, the next thing that they just got FDA approval for, a couple of months back, is Blindsight, made by Neuralink.
(28:54):
And as long as your visual cortex is intact, meaning you could have been born blind, you will be able to see, because they will be able to interface a camera with your visual cortex and send that data to your brain, and you will then see. Now, right now they say it's Atari-level graphics, but it's going to get better and better, and they envision one day blind people – either born blind or blinded later – being able to have infrared vision if they so choose, because the camera will support it and it's just a matter of hot-swapping cameras.
David (29:35):
Let's talk about something else that's really interesting that might surprise some people. At first it might sound like it's partisan, but it's really not. It doesn't matter if you're left- or right-leaning, conservative or liberal, or whether you vote for a candidate who's Republican, Democrat, Green Party, or whatever. Everybody's got a vested interest in using an AND approach versus an ONLY approach to the sectors related to energy and the environment. So this link between fossil fuels, AI, and concerns about the planet is right in front of everybody all the time, and real progress is only going to be made if all perspectives and ideas are valued. So run with the idea of how each faction or each group really needs to understand that there's got to be some give and take here in order for everybody to achieve their goals.
Daniel (30:36):
I think we need to, just for the listeners, explain the basics here of why we're even talking about this. What's up with all this energy stuff, right? So we got into this a little bit because I learned, as I was doing some of this research, that fracking, hydraulic fracking, had a significant impact on energy markets, which indirectly contributed to the environment in which AI and large language models have flourished. Fracking dramatically increased the supply of natural gas and oil in the US, and it made us less dependent on foreign energy sources by, I'd say, roughly the mid-2010s. This made us one of the largest energy producers. Well, that drop in energy price really helped increase the compute power of our AI systems. We were able to then connect enormous data centers and scale them in a way that we had never thought to do, because it had just been too expensive and would have cost too much power to run.
(31:40):
And it worked. To their surprise, the scalability thing actually worked, and they said, "Oh, what if we hooked up ten entire city blocks of thousands and thousands of computers, all thinking as a single brain?" And it worked. We just had to have the power. So data centers exploded, and that affordable energy led to the rapid expansion of cloud computing and data center infrastructure, and that was foundational to AI development. And so that then gave us bigger brains, which allowed us to throw more data at them and to have more complex algorithms to teach them with. So there are like three things that really control how much you get out of AI, and out of humans, by the way. One is your data set; that's your prior knowledge and experience and your education as a human, and it's the data you're able to throw at a computer. The second thing is your compute power. For a computer, that's just sheer processing power, and you get that by scaling more and more computers and having the power to run them. With a human, you get that with IQ, or general intelligence, and effort.
(32:54):
The computers beat us at that all day. They give a hundred percent all the time, and sometimes 110% if you ever clock them. (The tech guys will know what I mean by that.) The third thing is your algorithms, or your training. You can have a smart person with a big encyclopedia set and a poor teacher, and they're not going to learn as much. And that is similar to teaching with formative assessment: you go back and you tweak your instruction, you change it based on the feedback and so on. Here's the problem. If you are trying to teach technology or physics to your cat, it will never understand it. It's limited in its intelligence. You try to teach it to a third grader and you can get some degree of success, but never the degree you need, because of intelligence. So when we finally broke that compute barrier, because energy became abundant and cheap for a while, and we also learned about scaling, we then said, "Oh, we have a much smarter computer here with a lot more compute."
(34:03):
This is a 160 IQ thing, not a 70 IQ thing. Let's throw a lot more information at it. So they threw the internet at it and they used large language algorithms, a different way of teaching it that was less direct instruction and more of a constructivist approach, so to speak, to use educational terms. You go out there and you learn and construct your own knowledge by trial and error and such, and it worked. And so that's why we're where we're at with this. That's a lot to say, but it's important to understand, because what happened is, in this last election, I found it interesting that both candidates supported fracking. One candidate, Kamala Harris, used to be very much against fracking and was on video saying so. That's not a criticism of her; people are allowed to change their minds. I found it interesting that she did so, because that seemed to be a point that a lot of the left would generally not support at all, but she did.
(35:01):
And she was coincidentally – or maybe not – put in charge of handling the US's response to AI. She got a lot of people together, a lot of big brains with AI in the room, and they approached this, so she was aware, or must have been aware, of the importance of energy costs and fracking in general, and also knew that most of your AI experts are saying we need to double our energy output in the next five to ten years to be competitive with Russia and China in the AI race. Now, this race is not just academic; this is military, this is environmental, this is industrial, this is everything, because AI is just more IQ. So with 20 to 30 to 100 points higher IQ being put to the challenges we face today, such as climate change or nuclear proliferation or energy or whatever, renewables or cold fusion or Mars, [or] cancer, there are so many problems we could solve if we were just smarter than Einstein consistently. And it may be that we have to burn our candle brighter now so that we find solutions, renewable energy or nuclear energy solutions, in the future. And we are also in a situation where we're kind of on a runaway train here, and if we do not continue this, we're going to be outpaced by other countries, and it could happen in military ways. I know that's a very long explanation for all that, but
David (36:46):
No, it's just really fascinating that if you want to address climate change, for example, you must simultaneously embrace greater energy consumption in order to keep the AI working to generate a solution, so that you can then back off of traditional energy sources.
Daniel (37:13):
Well, here's another thing to consider. Economic growth is going to be critical to developing some of these new technologies. You can't develop Tesla without a certain amount of venture capital. You can't go into these areas and expect this to work without some sort of government funding in order to make these competitive on the market. How do you get that? How do you get that economic growth? Well, you don't get it by curtailing industries so much, due to carbon restrictions or some of these other things, that they start losing money, especially when you're competing with other nations who are more than willing to pollute any way they can in order to win. And so yeah, it's a Faustian deal in some senses, but it's also one of those things where you really can't afford to not win that race. So from a balanced perspective, I personally want to see us get off of fossil fuels and oil, but I understand that you cannot just say, "Well, we just have to ban all of it and go fully green and so on," because we just won't achieve what we need to achieve without using it now, at least from what I can see. It may be that we have to take another, harder look at nuclear, because it does provide pretty stable energy, and pretty massive amounts of it as well.
(38:28):
But those take a long time to develop, and there may not be the political will for it either.
David (38:32):
It's just very interesting that these issues are so connected: climate change and energy and AI. They probably aren't thought about that way very often by many people. But that takes us back really almost full circle to the human element, and that takes us straight back to teachers,
(38:57):
which is where this whole thing is headed, because of the incredible impact that educators can have with individual students, with a whole cohort of learners like a class, or even with generations of young people. So thoughtful, wise teacher-leaders sometimes prevent or mitigate problems, and they can absolutely bend thinking, for lack of a better phrase. So I would like you to think back on your learning path, including the skipping-school days, the stuff before, the stuff after, and where you are now, and tell us about one or two of your favorite teachers, and mention why that favorite-teacher moniker – really, that honor – fits those people.
Daniel (39:46):
I will shout out Donald Dowdy. He was my band director in high school, and he brought a certain energy and smile and understanding to every kid in the class. He brought humor. I mean, we're talking bad, bad dad jokes. We'd be on the field and he'd be yucking it up, telling, you know, stupid jokes, like, "How do you tune a fish? Have him play scales." And then we had to play the scales and stuff, and everyone's still groaning on the field, things like that. He was unabashedly dorky, and it was great, and everyone loved him for it. And he had a lot of words of wisdom as well. It wasn't just about band, and so he acted as a bit of a father figure to a lot of people there. So that was that emotional, personal relationship that he brought. Band was just a secondary thing.
(40:36):
The other one I would shout out is Bruce Little. He was my art education practicum instructor when I was doing my student teaching and all that, too. He was an art teacher at Georgia Southern University and ran the art education program. And he, man, he was brutal. He had us writing scripted lesson plans long after other people had said, "Okay, you've done enough," and he was still having us do them. And we pushed back, and he said it's not that you'll ever write these again. He was honest with us, and he said, "But I need to know what your thinking is. I need to know how you think, what you anticipate, how in your mind this lesson should go. So you're doing this for me, not for you." And that kind of honesty was a big deal for me. I was like, oh, okay. At least I understand.
(41:21):
He explained what the work was for. And then there was just the incredible rigor. Several people cried. He was tough. He expected higher-order thinking. He didn't want popsicle sticks and milk jugs and stuff like that for art, even if we were teaching elementary. He believed that just because you teach elementary kids doesn't mean you have to act and be like an elementary kid. You don't have to reduce your standards or rigor. I think he said something about how pediatricians don't act like kids, even though they work with them; they're still serious in what they do. Now, you act like the kid while you're with them, yes. But when you're away from them and you're talking about your craft, you take it seriously. And I do remember finishing my unit for my student teaching, and he was stoic as ever. And I go outside and I'm like, "Well, what do you think?
(42:10):
Was it good?" I was really nervous. And oh man, he just held his hand out and said, "I want to shake your hand. That was one of the best units I've ever seen." He shook one other person's hand that semester, that I know of. And it was just such an honor. He taught me that rigor doesn't have to be authoritarian, doesn't have to be crushing. He had a way of demanding excellence but also honoring you and respecting you and respecting the craft of teaching. And he was willing to change, even when he disagreed. I was going through art, and I believed that illustration was fine art, and he did not; he didn't think illustration could be seen as fine art. And so I did a lesson on an art critique, and we ran through the Feldman method of art criticism on an illustration, not just a fine art piece. And I changed his mind, and he allowed for that. And that was another thing: he was willing to adjust. He was willing to change his mind, even though he was so strict. So yeah, those people, they brought a certain level of rigor and professionalism, each in their own way. And all of those were based on relationships. You cannot replace those with AI. You just can't.
David (43:29):
A fantastic closing point. We could probably talk for another two hours and make a third and fourth episode, but we're going to cap the dialogue for now. Thank you very, very much, and have a great day, Daniel.
Daniel (43:43):
Okay. Thank you so much. Bye bye.
David (43:47):
Thanks for listening today. Find the Lead. Learn. Change. podcast on your search engine, iTunes or other listening app. Leave a rating, write a review, subscribe and share with others. In the meantime, go lead. Go learn. Go make a change. Go!