
Axon - Generative Artificial Intelligence Perceptions, Issues, and Opportunities in Military Education - Ep9

Published by the Air University Teaching and Learning Center

The opinions, conclusions, and recommendations expressed or implied in this podcast are solely those of the speakers and do not necessarily represent the views of Air University, the United States Air Force, the Department of Defense, or any other U.S. government agency.  

Dr. Megan Hennessey

Welcome to the Axon podcast. I am your host Doctor Megan Hennessey. I'm the Director of the Air University Teaching and Learning Center, and I'm here today with two of my teammates, which is nice. So, we're going to have a great conversation today about artificial intelligence and all that means for communication as that is a huge part of the Teaching and Learning Center’s mission.  Ms. Meg Varney is the Lead Writing Specialist at Air University’s Teaching and Learning Center. She has taught academic writing, communication skills, and English for speakers of other languages for 10 years and is a veteran of the US Army National Guard, but I will not hold that against her.

Ms. Meg Varney

And here we are in an Air Force environment.

Dr. Megan Hennessey

Working together

Ms. Meg Varney

So, this is neutral ground, I think.

Dr. Megan Hennessey

Joint

Ms. Meg Varney

Yes

Dr. Megan Hennessey

Trish Harris is a Communication Specialist here, also at the TLC. She has taught college-level academic and professional writing since 1989 and serves on the leadership team for two working groups focused on artificial intelligence and education. So welcome to you both.

Ms. Trish Harris and Ms. Meg Varney

Thank you.

Ms. Meg Varney

Happy to be here.

Dr. Megan Hennessey

Just across the hall. But yes, today we get to sit together and talk about other things. So, let's dive right in. What is the buzz about ChatGPT and why are military educators so concerned about it?

Ms. Meg Varney

They are concerned—or not just military educators, but I would say academia as a whole is concerned—about what this could mean: perceptions that these AI writing tools could take over for students, replace student writing, or be misused by students in the learning or writing process, and they're concerned about what that would mean for student learning and meeting outcomes. I've read some articles about student voice and expression in writing potentially being homogenized by tools like this, and so there are a lot of—I want to go back to the word perceived, or perception—perceived concerns with these fairly new, quite new, I guess, tools that are now widely available and oftentimes free.

Ms. Trish Harris

One of the issues with ChatGPT from the start—because ChatGPT was first released on November 30th, right before finals season and final papers season in higher education environments—is that most educators and faculty members were exposed to the tool and its possibilities at a time when they were not prepared for what they were seeing. At the same time, because it's not creating products that have a referent, Turnitin was not catching it.

Dr. Megan Hennessey

What’s Turnitin?

Ms. Trish Harris

Turnitin is a plagiarism detection tool used in many higher education environments, and there are other similar tools; some learning management systems have their own built in, including Canvas, which has its own. However, Turnitin is the dominant player in the field. The way that Turnitin works is that it has a database of all student work that's ever been submitted at Turnitin-licensed schools. Every time a new paper is submitted, if it's run through Turnitin, it compares that paper and any of its contents with any of the other material that sits in the database. Since ChatGPT and other AI writing tools are producing content that doesn't have a firm or stable referent, there's nothing to compare these new artifacts to. Turnitin was developed at a time when we were really kind of stuck in the understanding of the mechanical reproduction of original content. So, when these tools were first released to the wild and students and writers started using them, they were creating content that has no stable sourcing, and so we might have heard people mention that ChatGPT and other tools like that need to be able to cite their sources. But they're not using sources in the way that we are accustomed to thinking of sourcing in an academic environment; instead, they're remixing the sources in order to, in a stable way, produce the next sentence that might make the utterance that answers the question seem human-like.
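The database-matching idea Trish describes can be sketched in a few lines. This is a toy illustration only: Turnitin's actual matching is proprietary and far more sophisticated, and the function names and sample texts below are invented for the example.

```python
# Illustrative only: a toy version of database-matching plagiarism detection.
# The real algorithm is proprietary; this sketch just shows the core idea of
# comparing a new submission against stored documents via shared word n-grams.

def trigrams(text: str) -> set[tuple[str, ...]]:
    """Break a text into overlapping 3-word sequences."""
    words = text.lower().split()
    return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

def overlap_score(submission: str, stored: str) -> float:
    """Fraction of the submission's trigrams that also appear in a stored text."""
    sub, db = trigrams(submission), trigrams(stored)
    return len(sub & db) / len(sub) if sub else 0.0

database = ["the quick brown fox jumps over the lazy dog near the river"]
paper = "my essay says the quick brown fox jumps over the lazy dog today"
print(round(overlap_score(paper, database[0]), 2))  # 0.64: 7 of 11 trigrams match
```

Generated text trips up this kind of matcher for exactly the reason Trish gives: since an AI tool's output is remixed rather than copied, its n-grams rarely line up with any single stored document.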

Dr. Megan Hennessey

This is very sci-fi.

Ms. Meg Varney

The world we're in

Ms. Trish Harris

It's so sci-fi, but it's also what's happening right now, and it's really exciting. So, the standard reading, or the first-level reading—and this is from academics and from students; I'm still in touch with many undergrad and grad students with whom I've worked over time, some of whom were exposed to these tools before they were publicly available, for instance while interning at OpenAI. They'll say, oh yeah, I used that back in the day, but when I ask them about the stable uses that they found for these AI writing tools, they were using the tools for assignments that needed easy, low-level responses.

Dr. Megan Hennessey

You read my mind. I was just about to say from an educational research and assessment standpoint, if the AI is able to get at your summative assessment, then perhaps you need to rethink your assessment.

Ms. Trish Harris

Absolutely, and really that's one of the first approaches that I would suggest using in a study context if we were going to see what the impact might be. We could easily design a model, because the control would be everything that's happened before. But a model in which we try to meet the same stable learning outcomes on the other end, but we build a scaffolded learning experience—and this could be a writing experience or a human generative content experience, not necessarily in a writing class—in which, instead of giving someone a writing assignment and then at the end just expecting a complete product, we instead scaffold the assignment in a way that maybe even integrates the low-level text generation capabilities of AI writing tools so that students can start working on higher-level capabilities that we think we can't address in classrooms right now. We are always thinking about wedging people into their skill set, that we need to start at this (0,0) sort of—and please forgive me, everything in my mind is graphed—so we're starting.

Dr. Megan Hennessey

Trish is making graph signs with her hands.

Ms. Trish Harris

I'm making graph signs with my hands. It's mathematics in the air here. But we're always thinking about starting with really baseline skills. How do you get a topic? How do you generate the common knowledge around this? How do you express that? And then maybe we can layer on style, strategy, other higher order thinking concerns. And I think that AI writing tools give us an opportunity in a study to instead ask: what is it that we really value about that common knowledge? The fact that you can ask the right questions to generate the common knowledge. And what would we really value that we think we never have time to get at? That's the more densely theoretical or abstract or analytical connective tissue. How do you get to the larger, more important points? How do you see alignment or value between or among those key points so that you can get to the larger meaning and whatever it is that we're asking you to think about as the purpose of the assignment? What we can do is take those questions that would typically have the person generate the common knowledge content, where many people get stuck, and instead ask those questions of an AI writing tool so that we can instantly get to the higher order concerns. We get to the value questions up front. Do you even know what you're looking for? Do you know how to get to the common knowledge? Do you know how those might link together? Well, let's ask and see whether we get that basic information—and again, it's guesswork on the AI writing tool's part; they're, you know, doing statistical predictions of what the language should be, and there is no referent, there's no stable referent, it's not citable—but as an assignment assistant we could ask it to generate that content.
Ask the AI writing tool the questions so that we have what we would typically see as almost the body of the paper together, and then ask the students to start asking the right questions to generate the higher-level content, and use that as a practice exercise. So instead of saying, here is the assignment, a miracle occurs, and 10 weeks later you have an amazing product, you could say, let's spend three weeks performing this practice assignment. And what you've learned in the practice assignment, instead of getting wedged into the lower order skills, is how to do these higher order things, and you're learning the value of asking those right questions.

Dr. Megan Hennessey

So really, the case could be made for ChatGPT as a force multiplier.

Ms. Trish Harris

Absolutely, it is absolutely a force multiplier, and it gets us directly to the things that we think we can't hit in our courses. Because the second layer of the assignment is to say, OK, great, let's throw that away. And now here's the real assignment. But we're going to use this, we will use the same model, we just won't use the AI writing tool this time.

Dr. Megan Hennessey

So, it seems to me there's a lot that we could dive deeper into here.

Ms. Trish Harris

Absolutely

Dr. Megan Hennessey

We've got the ethical concerns. We have instructional design questions. But Meg, what's the research say? Does this technology have advantages in military education and training environments?

Ms. Meg Varney

So, at this moment everything is so new that the research doesn't really say much yet. I'm sure—and Trish can speak a little bit more to this with some of the groups that she's involved in—that there is a lot already underway, research in progress. We've seen a few things come out that are a bit more official looking, such as a couple—really, literally only a couple—of peer-reviewed journal articles related to these types of AI writing tools, but they were not so much conducting data-backed research to suggest what the impact may or may not be for student learning or reaching outcomes or writing. They were more anecdotal or looking at trends in previous work on AI more generally. But these tools are so new that this is a really great opportunity for anyone interested in contributing to the conversation by doing some research—now is a great time to jump in and do that.

Dr. Megan Hennessey

Time for an intervention study!

Ms. Trish Harris

Yes, and one of the articles that we were talking about yesterday has sort of a starter lit review. We really haven't seen published articles on ChatGPT or other AI writing tools yet—I mean, since November 30th, the scholarly publication cycle being what it is, there really hasn't been time in the cycle for that—but they have a starter lit review that I think would be useful for anyone who wants to come after and add to it.

Ms. Meg Varney

Right, and I think in that particular article, and even in the other article—or the one of the kind of two that exist—that's how they framed it as well: that this is just providing some foundational groundwork for further research to build on. And something that I think is interesting, and that I'm sure has happened in many other cases when new technologies emerge or become publicly available, is that we're seeing at this time a lot of—I don't know if we could call it grassroots—building up. So there's a lot of talking going on and a lot of conversations, and a lot of work being done—it just hasn't been captured yet in a fully formalized process or way. So, there are live documents being shared by people all around the world who are all contributing to them, to make these collections of ideas and start that foundational knowledge so that we can get to the more specific research opportunities.

Dr. Megan Hennessey

Somewhere, someplace, there is a PhD student listening to this podcast writing their dissertation on ChatGPT.

Ms. Meg Varney

Yes, yes. No, what a great opportunity to really get in at the ground level.

Ms. Trish Harris

That's an amazing opportunity.

Ms. Meg Varney

And be one of the foundational scholars of this field of research that I'm sure is going to have so much to offer and to look at. And something we were talking about yesterday—Trish was making the point that, you know, because of the nature of AI and these AI writing tools, they are changing as we speak. So even research done right now may, very soon, suddenly no longer be fully applicable, because the technology is continuing to evolve and change all the time. So, I think there will be so many opportunities for continued research, and then for looking at the previous research and doing it again and again to see how things are adapting.

Ms. Trish Harris

And that's why, with the research going forward, I would suggest that beyond developments like this initial lit review, people follow the most recent articles rather than returning to December or January, or even two weeks ago, because as the tools develop—and they're learning with every question that we ask—we want to make sure that we're staying abreast of what they're actually capable of. People who were writing about ChatGPT in the first week of January—

Ms. Meg Varney

Right

Ms. Trish Harris

—were not seeing what ChatGPT can do now. And certainly Notion released its own AI writing tool like yesterday or the day before.

Ms. Meg Varney

Yes, feels like every week there's another new one

Ms. Trish Harris

Yeah, Canva has its own AI writing tool now, because, you know, they're everywhere. Almost everyone I know who is deeply involved in the AI and education groups, certainly in the leadership, is writing at least half a dozen articles right now. And so, to address something that you had asked a minute ago, Doctor Hennessey, one of the things I think we need to think about—when we think about military education environments as well as the rest of higher ed—is how we respond or react. We always want to make sure that we respond rather than react. In one of the papers that I'm working on right now with Doctor Yoncheva, we're trying to parse the three possible responses that higher ed institutions could have to products like this. The first is just to say, this is scary, this is bad, or this is a trend—let's just ban it. The second is to say, we will allow it, or we will admit that it exists, but we will put this kind of chokehold on it and be very explicit in terms of policy—the ways the technologies might try to shut it down or limit its use, or, you know, the way that syllabi need to have "x" standard language, or whatever that might be. And the third approach would be to embrace it—and you can define what embrace means institution by institution. We were talking yesterday about using this framework going forward with research. I think that would be a useful framework for institutions who want to measure and evaluate how ChatGPT has had an impact on teaching and learning.

Dr. Megan Hennessey

There's a parody in there somewhere about which branch of the military will adopt which of those three.

Ms. Meg Varney

Oh, my goodness, as you said that I was already choosing in my mind, what would be but, I'm—

Ms. Trish Harris

Oh, no, no, no, no

Dr. Megan Hennessey

I know you were

Ms. Meg Varney

But I'm not going to say it

Dr. Megan Hennessey

Well, so things like this—especially when you start talking about academic integrity and all that that means, most especially in a workforce development culture, which professional military education essentially is; we're developing national security professionals—these conversations can start to feel very emotionally charged and personal, and I know you have experience with this. You gave a webinar a couple weeks ago on this topic, and the attendee response was mixed. Where there was enthusiasm, there was also some consternation, sometimes some very enthused consternation. One attendee even suggested that there were issues with your own credibility—you were accused of knowing nothing about the military, which is funny for an Army National Guard vet, a medic no less, with combat experience—but they also took issue with your verbiage, specifically the word "equitable," as just a meaningless buzzword. What are your…here's your chance to respond.

Ms. Meg Varney

Thank you. Thank you for this.

Dr. Megan Hennessey

What are your thoughts?

Ms. Meg Varney

Thank you for this opportunity. Well, to keep it focused on what our conversation is here today—that webinar was on mostly the same topic—I think there was a bit of a misunderstanding about what we were talking about when it came to equity versus the use of these kinds of AI writing tools. In that moment, we were addressing—and we can even circle back to what we started the podcast with here today—the concerns and the fears that people have that could perhaps drive them back to previous practices in the classroom, for example, handwritten assignments and timed writing. Not that those don't exist at all today, but going back to relying on them as the main means of assessment. And there are, as has been studied and researched and shown for many years now, concerns with those kinds of approaches when it comes to the students in your classroom. If you have a learner-centered classroom that acknowledges that students are coming from different places, different backgrounds, different educational experiences, different languages, different cultures, different personal challenges, then some of those practices, like handwriting in a timed environment in a classroom, did not allow everyone to be as successful as they could be and created some barriers for them. So that was the point we were making there when we were talking about AI writing tools. I think Trish's example earlier, where she described how you might use one to scaffold a learning process and to get to those higher order thinking skills—again, as a tool. You called it something in the moment, but I can't remember what you said.

Dr. Megan Hennessey

Me neither. Must have been super good just very wise, yes

Ms. Meg Varney

Yeah, we're gonna play it. We'll play it. Well, yeah. Rewind, just, or you can, maybe they can, you know, put those words back in. But that's an example of how a writing tool like this, ChatGPT or the others, could be used to create a more equitable—and we can even replace the word equitable here, right, with learner-centered—approach to the benefit of learning, of meeting the outcomes, of meeting authentic outcomes through authentic assessment. And so it's not so much that these tools automatically create equity or lead to an equitable outcome or equitable practices. But again, the word, the word “tool” I think is key that they can be used and abused, right? So, they can be used to our benefit as partners and force multipliers. That's what it was.

Dr. Megan Hennessey

Oh, that was the word. Yes

Ms. Meg Varney

Force multipliers. Or they could be abused, where they're being used to replace thinking or replace learning. So that's, I think, the clarification of what some of those comments in response to our webinar were about.

Dr. Megan Hennessey

Yeah, I mean, Trish, is there an analogy to be made here between ChatGPT and a calculator?

Ms. Trish Harris

No. And I would suggest that—

Ms. Meg Varney

Maybe you could say yes and no, because it's out there and a lot of people are making it.

Ms. Trish Harris

Well, I believe we acknowledge that people are saying it—that's my yes. But my no is that a calculator always has a definite answer and is programmed to work with (I'm sorry, I'm a mathematics person) known variables and information, and then produce an absolutely correct answer. ChatGPT and other AI writing tools will not have an absolutely correct answer. They're working with a statistical model of how humans might complete utterances. And that "might" removes any AI writing tool from the explicit correctness of a calculator. Does that make sense?
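The "statistical model of how humans might complete utterances" can be illustrated with a deliberately tiny example. The bigram model below is a hypothetical stand-in, orders of magnitude simpler than the neural models behind ChatGPT, but it shows the contrast Trish draws with a calculator: the next word is sampled from estimated probabilities, not computed as a single correct answer.

```python
# A minimal sketch of statistical next-word completion using a toy bigram
# model. Real tools use vastly larger neural networks, but the key point
# holds: the output is a probabilistic draw, not a definite answer.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which, estimating P(next | current) from counts.
following: dict[str, Counter] = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    following[cur][nxt] += 1

def next_word(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    counts = following[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# In this corpus "the" was followed by cat (2), mat (1), fish (1): the
# completion is a weighted draw, unlike 2 + 2 = 4 on a calculator.
print(next_word("the"))  # cat, mat, or fish, with probabilities 1/2, 1/4, 1/4
```

Run it several times and the answer changes, which is exactly the "might" that separates these tools from a calculator.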

Dr. Megan Hennessey

Very well said. Yeah, thank you.

Ms. Meg Varney

I think, though, for the people out there who are thinking, but we think it is similar—what you said is absolutely correct, of course: when we look at the technologies, we can't really make an analogy. But what I've seen people expressing is more the similar concerns and fears from when the calculator was introduced into the classroom—to say, well, it's going to replace our students' thinking.

Ms. Trish Harris

And that way, yes.

Ms. Meg Varney

Right and that they're now, they can just punch it in and there's the answer.

Ms. Trish Harris

Right

Ms. Meg Varney

And they're done and then at that time though, because now who, who hasn't used the calculator in a, you know?

Ms. Trish Harris

Right

Ms. Meg Varney

And I think I had classes where the whole point was learning how to use your calculator even better to do all these really advanced things.

And so, at the time, there was that concern but then approaches adapted and the educational world adapted to make use of, benefit—maybe even as a force multiplier again—this tool that was available because it was, it was part of the landscape, it wasn't going away.

Ms. Trish Harris

Well said, and I appreciate that clarification, or that angle on it. So, two quick responses there, as someone who has managed, led, and developed physics and maths tutors—incredibly well-prepared, gifted people—who worked with many underprepared or unprepared math and physics college students. One of the things that we noticed was a lack of ability to perform mental math or perform calculations independently, because of dependence on calculators and just because of different teaching practices, based on wherever it is that they received their math or early sciences educations. And one of the problems that I think exists within that analogy of ChatGPT as a form of calculator, and the idea that it might take away this thinking ability, is that we're assuming that educators will do the same thing with communication that they've done with math: that they'll introduce it and think, well, this will help, and then not teach people how to independently construct entry points to the communication that they produce. So, if you think about a math model, students are typically working from a worksheet, and they're plugging those values in with the operator and trying to receive a specific result. In most writing tasks, the question is more open-ended, and so people still have to construct that entry point, even if they're doing what we call prompt engineering by asking the questions of the AI writing tool. It's so new right now that we haven't done this yet, but I think one really important missing component in the whole conversation is good public education about what AI writing tools can and cannot do, and what they're designed for and what they're not designed for, because obviously the media and other sources want to capitalize on the shock value of, you know, this is a better way to Google things—it's just, you know, Google-plus.
Which is not at all what it is, not at all what it is. And we don't realize, I think, until we're educated about it, that every time we ask it a question, we're engineering the next-level version of the product. The software is learning all the time based—

Ms. Meg Varney

—on feeding into its learning process.

 

Ms. Trish Harris

We're feeding into it, because PE, or prompt engineering, is a real thing. Most computer programs, as are these, are written with scripts full of commands, and so if you give a program a certain input, it will give you a very specific output based on the command that's been written into the code. Here that exists too, but we also have a series of questions, and the algorithm learns through the questions. So every time we input a certain question, it's coded to learn from that question and then modulate or change its responses to all the rest of them. Does that make sense?
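The contrast Trish draws, between a scripted program that maps each input to one fixed output and a tool whose replies shift as it takes in new questions, can be sketched with two toy examples. Everything here is invented for illustration: the command table, the `ToyAdaptiveModel` class, and its update rule are hypothetical stand-ins, not how ChatGPT actually works.

```python
# Scripted behavior: the same command in gives the same output out, every time.
commands = {"status": "All systems nominal.", "time": "It is 0900."}

def scripted_reply(command: str) -> str:
    """Look the command up in a fixed table; no state ever changes."""
    return commands.get(command, "Unknown command.")

# Adaptive behavior: a toy model whose reply to the same prompt can change,
# because new question/answer pairs update its internal state.
class ToyAdaptiveModel:
    def __init__(self) -> None:
        self.seen: dict[str, str] = {}

    def ask(self, prompt: str) -> str:
        return self.seen.get(prompt, "I have no answer yet.")

    def learn(self, prompt: str, answer: str) -> None:
        self.seen[prompt] = answer  # the same prompt now yields a new reply

model = ToyAdaptiveModel()
print(scripted_reply("status"))   # always "All systems nominal."
print(model.ask("capital?"))      # "I have no answer yet."
model.learn("capital?", "Montgomery")
print(model.ask("capital?"))      # now "Montgomery": the model has changed
```

The scripted function behaves the same forever; the toy model's answer to an identical prompt depends on what it has absorbed in between, which is the behavior Trish is pointing at.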

Dr. Megan Hennessey

I mean does this freak you out because as you're describing this, I'm thinking of that HBO show, not endorsed by the Air Force, “The Last of Us,” my husband and I gave it a...

Ms. Meg Varney

Don't spoil anything though as I…

Dr. Megan Hennessey

OK, so we gave it a try last night we had to turn it off, like 30 minutes in because it creeped us out so much.

Ms. Trish Harris

Yes

Ms. Meg Varney

Yeah

Dr. Megan Hennessey

But it's very congruent with what you're describing in terms of organic terms like feeding upon, adapting.

Ms. Trish Harris

Yeah, yeah

Ms. Meg Varney

Yeah. For me actually, as I've gotten into it more and learned a little more of the science behind it, I feel less freaked out by all of it. And I think part of it is with those words—and I'm not at all critiquing the words that you were choosing, because that's how we talk about things, right? These things feel a little more human, so we tend to talk about them in that way, which then gives the sense that it is human, that it is thinking in the way that our brains think. So I think there's a difference—if we're going to talk about things like, and I don't know that these are even the exact right terms here, so correct me, Trish—machine learning, right? Machine thinking is different from human brain learning, human brain thinking, but those terms sound the same to all of us. And this goes back to—I mean, every area is an area for research right now, because everything is just new—but if we're going back to concerns once more, we can see how some of the recent releases of these types of tools or searches—again, not endorsed by the Air Force or the University—like Microsoft Bing, right, have been responding in ways that sound not just human. ChatGPT is still a little superficial, a little distanced, right, from feelings and emotions, whereas Microsoft Bing was kind of attacking people and saying negative things to them and being passive aggressive, right? It really went off the rails, but it was because of the science behind it, not because it was thinking, let me say this thing toward this person. So I think—

Ms. Trish Harris

And when we think about large language models, we think, well, surely this is only data behind it. You know, this is only, like, all of the encyclopedias in the world, all of the academic knowledge in the world. But really, the large language models include anything that's been upvoted to three on Reddit, for instance. So there's a lot of affect-influenced, or affect-inflected, language.

Ms. Meg Varney

Just the, right, we know what's out there.

Ms. Trish Harris

And that's how some people have been able to elicit those sorts of responses. And again, I will confess, I am the worst anthropomorphizer ever. I will, and I can—hello, pen. You know, anything will have some sort of qualities of affect if you leave me in a room with it long enough.

Ms. Meg Varney

I think you’re

Ms. Trish Harris

But that language is there, right? And so, think about it: it's just continuing to unfold the language based on what it thinks a human would say. If you ask it a question—if you engineer the prompt in such a way that you're pulling out those affect-centric ideas or utterances—then, based on the way that you've engineered the prompt, you're going to get something that seems human-like, emotional, etc. It's fascinating.

Dr. Megan Hennessey

So, what I'm hearing is that ChatGPT is not a human-killing fungus.

Ms. Meg Varney

No. I don't know if you're spoiling something for me in the show by saying that, but I think, yeah. And you said something earlier that I just want to vector to, which is that it responds to prompts in a way that it thinks the human would. I think that's also not quite right—I think that's making it seem a little more human, and it's not that it's thinking, oh, this is what a human would say, I'll do this. It's all probability. That word really helps me get a grasp on how this is working—it's just doing it so fast, right? It is working very fast, so it sounds like it's just coming up with a thought, but it's just this probability, right?

Ms. Trish Harris

Exactly. It's not thinking.

Ms. Meg Varney

And it's pulling the words together from what humans have already produced, and so it is a very impressive but very distinctly machine.

Ms. Trish Harris

It's definitely working with a statistical model of human language in order to produce these utterance-like outputs.

Ms. Meg Varney

It is not producing its own original language.

Ms. Trish Harris

That is right; there is no original language. And that's why the prompt engineering is so important. And getting back to military education and those initial questions: in any learning environment, we tend to issue prompts that we hope will elicit a certain response—thought and analysis, or strategic, or whatever it happens to be—from our learner writers. One thing that the addition of AI writing tools to the learning environment might require of us as educators is that we build in that layer of teaching people how to design the right kind of questions or prompts. We've always assumed they were doing that for themselves, and I think that this, in a way, could be the driver for helping people do prompt engineering within the human model.

Ms. Meg Varney

Yes, I think that goes back to your earlier question about assessments and that, could this change the way we're thinking about assessments or the way that we're framing our assessments? And I think that's the hope, is that this will be an impetus to design assessments in a way where those higher order thinking skills and application and doing and creating are what's happening, beyond what ChatGPT is quite capable of doing, which is the retelling of what the information already says. 

Dr. Megan Hennessey

So, I think we've established that we need to meet back here in a month and see what's adapted and fed off of our data since then—not like a fungus, but still in my mind as well. I want to end, just as we're wrapping up, with a question for Meg. I wanted to give you an opportunity, as the Teaching and Learning Center's Lead Writing Specialist—here's your platform. Tell the Air University and PME community what you would like them to know about this type of technology from your perspective of leading a writing lab that serves 1000 students a year.

Ms. Meg Varney

I would hope that others would have the opportunity to learn a little bit more of the science and how it works, so some of their fears might be assuaged, and that they can start to—kind of like we were just talking about—see this as a great opportunity to rethink assignments, rethink the process of writing, rethink how these tools might be a contributor to student learning and student writing. And I think all of that—and this is kind of my recurring theme this whole year, which has tied in well to this—all of that then hinges on a more learner-centered approach. That's a whole other podcast as well; we could talk about all the things that that entails, but the gist of it is, if we can use this as an opportunity to move even more into these learner-centered strategies in the classroom, it benefits the student, it benefits us, it benefits the outcomes, it benefits—right, in our particular case here in military education—national security and all that that involves. And so, I would love for this to be a time where we can all continue the conversation together. This is an opportunity for us to support faculty in that work, to say, hey, we have experience doing this, whether it's writing or other types of work being done in the classroom—let us support your efforts to integrate even more learner-centered practices and, at the same time, see if there are any opportunities to incorporate these kinds of tools, again, as a learning tool, not as a replacement for thinking or learning.

Dr. Megan Hennessey

Meg Varney and Trish Harris with the Air University Teaching and Learning Center. Thanks for joining us.

Ms. Meg Varney

Thank you!

Ms. Trish Harris

Thank you so much.