The Executive's Guide to AI in Business
Part 1 of 4 in Promevo's AI Webinar Series
Join Brandon Carter, the Marketing Director at Promevo, and expert panelists John Pettit, Colin McCarthy, and Jay Feliciano as they dive deep into the transformative power of AI.
In this first episode of a four-part series, the discussion covers the implementation of AI within organizations, ethical considerations, security risks, and how to avoid common pitfalls. The team also addresses the necessity of having structured policies, the benefits of managed AI tools, and the potential future landscape of AI technology.
This interactive session includes real-world examples, expert advice, and practical steps to help businesses successfully integrate AI into their workflows. Don't miss the Q&A segment and valuable insights tailored for businesses of all sizes.
Timeline & Topics
00:00 Welcome and Introduction
00:15 Today's Topic: AI
01:12 Meet the Experts
03:44 Promevo's Role and Services
06:10 AI in Your Organization
06:46 BYOAI: Risks and Policies
10:11 AI Policies and Governance
15:49 AI in Education
17:32 AI Management Structure
24:44 Gamifying AI Adoption
24:51 Diverse Use Cases and Challenges
25:30 Cross-Departmental Collaboration
26:21 Importance of Clear Prompts
27:47 Driving AI Engagement
28:43 Support and Training
30:52 AI for Different Roles
35:03 Build vs. Buy AI Solutions
39:47 Ethical Considerations in AI
45:30 Future of AI in Organizations
This webinar is part of Promevo's four-part webinar series on AI in the workplace. Use the list below to browse the other sessions.
Transcript
Brandon Carter:
Good morning, good afternoon, and good evening to everyone out there. Really glad to see everybody. Thank you for joining us, those of you that are viewing it live, and those of you that are viewing this in the future.
Our topic today is AI. Everyone's favorite, right? The hottest conversation out there. It feels like it has been for a few months now, and we're reaching a certain degree of maturity. And so a lot of you here, and a lot of the people watching, are trying to figure this out.
How do you bring it into your organization? What's the right way to do this? How do you avoid the pitfalls, the security risks, the ethics issues, things like that? So that's what we're going to dive into today, and we've got some presenters here who have been right in the middle of it.
So the way things are going to go today is pretty straightforward. Like I just said, we're going to introduce everybody, everyone's going to say hey, and then we're going to have a discussion.
We have a live question and answer sorted out, but I think we're probably going to do that in the flow of things. This is meant to be more interactive, not so formal. Don't feel like you have to save your questions for the end or anything. We want to hear from you live, so feel free to drop those in.
So my name is Brandon. I'm the Marketing Director at Promevo, and I'll be your host today. And these are our experts, and I'm going to have them introduce themselves. Starting with you, John.
John Pettit:
Hi everybody, John Pettit, CTO here at Promevo. I've been in technology for more than 30 years at this point, and I've built and deployed AI systems across the automotive and financial spaces.
I'm really excited to talk about how we begin to deal with sort of the proliferation of AI we're seeing across the landscape.
Colin McCarthy:
Thanks, John. Colin McCarthy. I'm the Change Management Leader here at Promevo.
I was a long time Promevo customer. Promevo was my Google partner for 10 years, and I recently joined the team.
Long time Google Workspace advocate and user. I've actually got a Google Doc that was last edited in 2006, which is probably just around, or just before, the time that Google bought Writely, the company whose product originally became Google Docs.
But yeah, I've been loving all of the changes in technology and certainly this new industrial revolution of AI.
Jay Feliciano:
Hi, everyone. I'm Jay Feliciano, Engineering Manager over at Spotify for what we like to call the digital workspace. Been there for about eight years now.
And if anyone knows anything about the world that we live in, it's always changing, never stagnant. And now, with the new onslaught of our AI overlords, we're super eager to adopt and willing to see what AI can do for us. And I'm really eager to see what AI can do for all of you.
Brandon Carter:
Love it. Thanks to all of you for joining in. Again, thanks to our viewers who are here live. I want to point out that all three of these fellows presenting with us today have been right in the middle of it.
Jay has led a number of AI initiatives at Spotify.
Colin is really our leader at Promevo. When a company comes in and works with us, Colin is the one that works directly with the company, training the leaders, training the employees.
And of course John, as our primary technology guru, has helped lead the charge for Promevo to get involved here.
A lot of good experience here: people who have been in the middle of exactly what you're wondering about. How do I do this? What's the right way to do this? These guys have been through it, they've got a lot of answers, and we're going to get into a whole bunch of that.
So for those of you that are not familiar, Promevo is a premier Google Cloud partner. We specialize in helping companies like yours get the most out of your Google Cloud stack, and that includes Workspace, that includes Google Cloud Platform, some of the data analytics products like BigQuery and Looker, and of course Google's suite of AI tools, Gemini, Vertex, and all of those.
We also offer our Google Workspace user management and reporting platform, gPanel, which gives you a whole slew of tools: powerful automation, reporting, and comprehensive control over your Google Workspace instance and your users.
So this is the first in a four part series. If you've registered for this one, then you've registered for all of them.
Obviously, somewhere in here, we hope that one of these four is really going to hit home with you. We really think that these four topics, which we're going to hit one per month, are going to be extremely relevant to you.
Today, obviously, we're focusing on AI for your employees, AI as an enablement tool within your business.
Next month, August 20th, we're going to talk about AI in your products and your services, and we're going to have an expert from Google joining us for that one. Those are questions like: is AI right for every business? Pick any kind of business; is there a way that AI can enable the customer experience? Can it even enable some of your employees' tools? So we're going to dive into that on the 20th.
September 17th is going to be all about proper AI adoption, regulation, and management. Things like: you've brought in all these AI tools. How do you make sure that you're getting the most out of them? Companies are going to invest a bunch of money in giving employees these tools. How do you make sure they're using them? How do you keep an eye on it? How do you ensure you're getting the most bang for your buck?
We're going to touch on some of that today, but September 17th is where we're really diving in on that one.
And then October 22nd, we're going to bring it all together.
We're going to have a handful of companies that, like Jay at Spotify, have been in the mix with AI. They've brought it in, and we're going to talk about what their experience has been, what they've learned, what worked, and what didn't. So we hope you'll join us for all four of them. And of course, everything will be recorded and shared out afterward.
So with that, I think you're probably tired of hearing me talk. Let's have a discussion here, guys.
I think right off the top, the thing a lot of people are wondering about: AI is coming to your organization. AI is already in your organization, right? One way or another.
And if you haven't brought in an employee-managed, sorry, employer-managed AI service, you've probably got a bunch of employees out there who have set up their own instances of ChatGPT or Claude, or even the Gemini that comes with Gmail. I'm curious to get your perspectives on this.
We'll start with you, Colin. Is it okay to have a BYOAI setup as a company?
Colin McCarthy:
"Bring your own" anything has never really been allowed. BYOD is often done with strict controls and an MDM. But bring your own shadow IT, or bring your own AI, just sounds like bring your own data leakage.
We have centralized applications for the reason that every admin on this call knows: to have a centralized ability to put in security controls, and to know what that vendor's security posture is and what controls they have in place.
So I don't think any sensible IT support technician, IT manager, IT director, CIO, or CISO can turn a blind eye to BYOAI. You cannot bring your own platforms. They do have to be employer managed.
And I can see heads nodding, so I'm sure Jay feels the same way, and would warn any of our community colleagues against not putting in blocks, or not being aware of where their accounts are being signed into and what data is being used.
Jay Feliciano:
Yeah, just to jump off of what Colin's saying here: if AI is in your environment already, you're already way behind if you don't have policies, if you don't have anything in place to protect your data. Because as everybody knows, free is never free.
So sure, these companies are testing their AI models out on us and training off of that data. And what are we giving up? What are we giving to them?
If we take some video tools, and we'll leave the names out of this: they have your likeness, they have your voice, they have the data that's processing through. If you're taking notes, they have your notes. All of that's in the tiny fine print that none of us ever read.
I highly encourage people to do that. But yeah, employer managed, in my opinion, is the only way to go with AI.
Colin McCarthy:
Yeah, because certainly with Google Workspace and Gemini, when you have that license assigned, you know that anything you do is not shared. It's protected. It's got the same security controls in place.
It's built on the same infrastructure. Back in my previous admin days, I was always cautious and concerned about allowing users to OAuth in and give apps rights over Google Drive. People often do it because it's a pop-up that prompts you, and you think it's fine because it's providing the tools and the access you need. But it could also be oversharing documents. Some applications will say, oh, we only want permissions to read or write the documents we create, when it's actually read and write to your whole Google Drive account. That is a red flag.
And that's the same whether we're talking about AI or any application. But it comes down to the old, longstanding arguments over shadow IT and how much of it you ignore. I think companies cannot ignore AI when it's being used as shadow IT.
John Pettit:
I would wager that most organizations are behind on coming up with policies for AI.
Their boards and their CEOs are extremely enthusiastic about seeing AI make its way into their organizations to create more productivity, right? To help with new gains. Everybody's asking them what their strategy is. But to me, it's just an extension of your information security management system.
You still have to govern it like any policy. You just have to map what's using what, and what data access and scopes they have, to be able to measure it, right? Who's doing what? How often is it used? You have to be able to manage this on a regular basis.
I'd be curious to know how many people out there have even started a written policy on AI that's been communicated to employees as part of their regular training or education. I'd also be curious to know how many organizations have begun to create an AI committee or review group, right? So as people want to bring technology in, they can do it safely and transparently, along with any other software.
Because you may have already approved something like Asana, and they're adding AI into it, and silently passing terms to you saying, this is how we're going to use your data. You may have felt comfortable in the past with how your data flows and what's being processed, but are you still comfortable with how the AI is going to process it? I think these are areas where people have to take a step back and get engaged.
Brandon Carter:
Sorry, without necessarily drilling into each AI tool that's out there, are there some that maybe are better than others? Or is it, no, you're using a free AI tool, and that's a liability for us as an organization, end of discussion?
John Pettit:
I would say it is because you don't get indemnity for free, right?
Think about your IP controls: you're creating things with AI, right? So everything you create has potential IP liability. And if you have no indemnity and no proof of how these things came into play, how they trained the model, I think free AI tools are dangerous. The use case would have to be really narrow in scope before you'd allow people to use them.
Yeah. I think you need to make sure you understand your terms of service and that you're paying for a corporate AI system.
Colin McCarthy:
Yeah, and it's interesting you mention policies and updating your acceptable use policy. That is the first place to start, even if you don't yet have a strategy for how you're going to technologically manage and assign licenses, and how you're going to evaluate tools.
I don't think any admin would block everything. It would be very hard to block everything, because people would find a way around it. But yeah, a policy on usage is the first step.
And I'm sure there's a lot of companies who've made mistakes. We've all seen the headlines of lawyers quoting AI hallucinations in front of a judge. We've heard of a large electronics company whose engineers put confidential information in, and it became public, or was indexed and used to train the model. So yeah, the first thing is to make a policy template.
And I bet the first question is, how do I create a policy? Turn to an AI platform and have it create the policy.
Jay Feliciano:
Use the AI to control the AI.
Colin McCarthy:
Yes. You just need one license assigned to yourself, and your first query, as long as it's crafted correctly in the form of persona, task, and context.
Jay Feliciano:
Colin raises a really good point because, self-admittedly, I am neurodivergent.
So it's very difficult for me to start a policy or a template, especially starting from a blank page on something so new. Where do I start? How do I begin? Usually we turn to our colleagues and try to figure out, hey, what did you do? But this market is so new that a lot of people are in the same spot we are all in.
So yeah, admittedly, my first draft was based on what Gemini told me I needed to consider in the current AI landscape, and it was a great starting point.
I'm not going to say that it was perfect, because it doesn't have all the context of the company. It doesn't know the ways that we work or what security policies we have in place today. But it gave me enough of a fantastic starting point that I could say, okay, I can fill in all the blanks here now. It was great.
And yeah, it's morphed and grown leaps and bounds since then. But if you don't have one today, perfect place to start.
Brandon Carter:
Yeah, I think that's a good one. We had a question dropped in the comments: is there a sample policy the viewer can be pointed to for K-12 schools?
Based on what you just said, Jay, I'm guessing the answer to that is probably yes. You can probably have your AI tool get you started with it.
That may be as good as anything, because is it fair to say that for stuff like this, it's still the Wild West? There's not necessarily a governing body or best-practices organization for something like an AI use template for any kind of organization. Is that a fair statement?
John Pettit:
There's frameworks.
NIST created and rolled out their AI Risk Management Framework (AI RMF). It's a million pages long, and you can use AI to digest and dissect it, help you find gaps in your existing information security management system, and add on to it. I would think for K-12, you have a couple of groups you would include in the policy.
You have internal staff and teachers, and then you have students, and you probably have policies directed at both, in terms of plagiarism and things like that people are creating. You probably need to include a group of people and use AI tools to draft the policies.
Colin, you worked in education in the past. What are your thoughts?
Colin McCarthy:
Yeah, AI in education is a very hot topic. I'm married to a former educator, and all of our friends and family are in education. The use of AI is a controversial topic inside education.
And it is a good resource for educators to create lesson plans and the very engaging, dynamic content that a lot of modern students need.
However, there is that horrible flip side where students will just use AI to do their work. But toward the end of their school career, or in college, as companies like Jay's and mine deploy AI tools, it becomes a skill, just like learning to touch-type back in the day, or learning how to use Windows or Mac or Linux.
And it is going to be a skill, so it needs to be taught at the correct level in education. I know the government has done some work on AI policies, and I would hope the Department of Education has a template that schools can choose from and use.
Yeah, it's a tough one.
Brandon Carter:
To bring it back to the business world, I think this is similar to when the smartphone really began proliferating. You can't keep them out of school. Maybe you can, but is that the right way? Kids are getting smartphones and smartwatches earlier and earlier.
We're seeing that happen in parallel with AI and the conversation for employees is not all that different than it was for students or really anybody.
I want to talk about the structure we were hinting at earlier: the structure of AI within an organization. And the question here is, do you need to have a Chief AI Officer?
Because based on some of the comments you guys were making earlier, part of this definitely involves security, but I'm also hearing there are a lot of different departments involved here. Do people need a Chief AI Officer, or, in your experience, how does AI fit within a management structure?
Colin McCarthy:
Yeah, and there's two different parts to AI. There's the end-user AI tools, and then there's the infrastructure AI tools that companies integrate into their own platforms and systems. I would say incredibly large companies probably do need a Chief AI Officer.
A lot of companies have Chief Data Officers. Is the Data Officer role going to adapt and become a Data and AI Officer? Because normally AI, certainly Vertex AI on Google Cloud, is used to analyze corporate data.
John Pettit:
I think it's an interesting question. AI is transformative, and some organizations have a Chief Transformation Officer, responsible for initiatives that are meant to transform the operations of the business. And so you could see it falling into that role.
If you don't have a Chief Transformation Officer, this is a blend of roles: security, data, operations. And so you're going to need CEO sponsorship.
Jay Feliciano:
You both hit what I was thinking right on the head and pulling, pulling both those together.
I really think this comes down to the maturity of the org. Where is the organization in its health with regard to security and policy management? Do you have an office of data protection? Do you have a security team that's specifically looking at where your data is going? Or is it all on the shoulders of Sally the IT manager, whose CEO just said, we need AI, make it happen? That's not fair. So it really comes down to where your org is.
If we have a couple of seconds, I can share how we're managing it today. And it's exactly like what John and Colin are saying: it's a mixture of teams and individuals, it's partnerships, it's relationships.
So right now, for our AI governance, we'll use my team as an example: my team hosts a lot of tools that have AI components in them. Between our tech procurement team, our security team, our legal team, our data protection team, and HR, we all collectively make that decision together. So do I see a role for a Chief AI Officer in the future at large companies? Absolutely. Is it realistic for every company? Probably not. But I would say building the relationships, and getting it on all of these teams' roadmaps to figure out how they each want to handle it, is probably the starting point I would suggest everybody jump to. Let everybody know that AI is here. Let everybody know what it can do. Let everybody know the dangers and the benefits.
Honestly, scare them. That's what I do with my leads.
Brandon Carter:
Is it fair to say that Spotify and a lot of bigger companies have the resources to get the details of every aspect of this nailed down?
Colin, if you're a small company and you just want to put AI in the hands of your employees, should you be intimidated? Should you be scared? Do you have to have all of these ducks lined up?
Colin McCarthy:
I don't think you have to have all of them lined up. And you should be putting a managed, controlled, licensed, protected version of an AI, regardless of what platform you're on, in the hands of your end users.
Because if you don't, they'll go and find something, and then you'll have no visibility, no control, and God knows what will happen. And you don't want to be involved in those "God knows what happened" discussions that will happen eventually.
As we said, I think the stepping stones to this are: create a policy on general AI usage, say that you're only supposed to use approved platforms that you're going to be licensed for, and then pick whatever platform you want.
Work with those vendors, work with a partner like ourselves, and don't just give it to people; actually do a deployment, do training sessions. It's just like when Excel was rolled out to millions of people, or when Microsoft introduced the Ribbon in 2007. I'm trying to think of historical examples where everybody thought they knew how to use it because it was technology, but it was a new platform, a new tool that people had to be taught how to use.
So learning to correctly integrate and operate with an AI platform is a new skill. You can't just assign it to your users without giving them training and having some deployment workshops. And I know, Jay, you've done some deployment work on all of that, I believe.
Jay Feliciano:
Yeah, so I can talk to that one too.
Same thing, like what I was saying: you can only gate it so long before you have to do something. The approach that we initially took was to see what problems folks were trying to solve with AI. And it's funny, one of the biggest ones, believe it or not, was transcripts: AI note-taking for meetings.
And I know Zoom came out of the gate with, I forget what their product was called, Zoom One or something like that, where they came out hot and heavy with AI recording. It's not free.
If you look in the Google app store, there are tons of add-ons and plugins, all "free."
So needless to say, I decided one day, just for shits and giggles, to go through the back end and see what people had been OAuthing and allowing in. After visiting the ER to take care of the heart attack I had, we sat down and said, okay, if you look at the scopes, it was a gross overreach in what these apps could do.
I'm not going to disparage any of these companies and say that's what they are doing. But our users were allowing them the ability to consume our data, to take all of this from us.
So we saw, okay, we have an immediate need that we have to fill. And that's why I suggest everybody partner with your resellers. These fine gentlemen right here already know what it's going to take to deploy this for your company, and how to do it safely.
And that's the route that we went. We went with our reseller, and we formed, I don't want to call it a partnership, but we got together with Google and our reseller and created an entire campaign on how to get Gemini to our users.
And at that point, we created a whole roadmap of how we wanted to roll this out, with a whole test bed of users, rolling it out in chunks of people and getting that feedback as we went. I think the whole thing took maybe about a month and a half, and we gamified it, making it fun.
That's the big one: make it fun. And you're going to find out one important thing out of all of that: no one knows what they really want and need from AI. Every single person has a different use case and a different way they believe it should work.
And that's where it got to be a lot of fun for us, because now we're seeing, okay, this may not work for you, but this would work for you. Or you'll hear two different people: one says, this worked amazingly for me, I love Gemini, and someone else says, I found no use for it.
And now you want to start digging into that. Why? How can one find value in it and the other not?
Colin McCarthy:
Yeah, it's a perfect catalyst for cross-departmental, cross-team collaboration. A lot of the workshops that we've been running have multiple participants from different departments, and it's great for them to be able to share how they're using it.
And I even found a way of utilizing it while I was demoing it that I hadn't realized I could do. So it's not something that you can just open up, put in a prompt, get nothing back, and think, oh, it doesn't work for me.
It is a constant thing that does require training. I'm always reminded of the very old phrase we used back in the eighties and probably the nineties: garbage in, garbage out. That's what we always talked about with IT initially. If you put garbage into your computer system, you're going to get garbage out. It's exactly the same with prompt generation.
If you're not clear about what you're going to ask and how you want the response back, you're not going to get what you want.
Yeah, it's good to hear that you've also done that phased rollout, Jay, because support also has to be brought up to speed. Being the champions, they're going to be the prompt masters, because everybody's going to open a ticket saying: I did this prompt in Google Docs and didn't get the response I wanted. Gemini is broken for me. Can you fix it?
Jay Feliciano:
Yeah, it's very true. It's very true. And I just wanted to call out real quick.
So we spoke about how AI can help us get out of our own way and how we can do things faster. For the person in the chat who asked for a K-12 policy sample, I just generated that for you. I sent it to Brandon and to Colin so that they can share it with you.
But that's exactly it. Yeah, you have to do this in phases. You have to drive adoption, because this is a tool. It seems simple because you just get this cool little line: What do you want to do today?
That's such a huge question. I want to do so many things. And that's where we need to help these folks.
Colin McCarthy:
Yeah, the scrolling prompts that go through Gemini when you fire it up aren't really very well-crafted prompts.
Studies have found that prompts of 21 words or more are a lot more effective and give you a much better response.
So yeah, and as to the other question in the chat, about how you drive engagement: it's to find those champions inside your organization, turbocharge them with the skills, give them some of the resources, and then use them to help grow adoption organically through the organization.
That's what we've seen, for any change, any deployment, certainly for a Google Workspace or any tool change.
Brandon Carter:
Yeah. Obviously you saw the question in there.
Thank you, Stephen, for engaging. It's a quiet crowd, so we appreciate you jumping in with questions. I will forward you, Stephen, the file that Jay just sent me.
But yeah, just to put a period on that one: Stephen is asking what engagement looks like for companies that don't have the ability to add headcount for an AI officer. And I think, Jay, you mentioned earlier: work with your reseller. If you don't have a reseller, contact us at Promevo. We're happy to help you with that.
You will work with Colin and people on his team directly to roll it out. They'll help you answer all your questions and create a couple of champion users who can lead it across your organization, and then train the whole group up. Did I characterize that correctly, Colin?
Colin McCarthy:
Yeah, that's what we'll do. We'll run a series of workshops, help your users learn how to do the prompts, and encourage them to use it in every aspect of what they do. It is a new skill to learn.
And whenever you fire up a new Google Doc, as Jay said, to write a policy, turn to Gemini and have it create a template or a document to edit. I've done that for a lot of my policy documents and emails in the last couple of months. I would never have been able to write them all from a blank piece of paper.
And the one thing that we've seen through all of the workshops is very consistent in the surveys that we send out. Virtually every single participant, and there have been hundreds of them, comes back and says: I've saved time, and the quality of my work has increased.
And if you're in the client service industry, professional services, any job, you always want to be doing better work in a shorter amount of time or have more time to do better work. And that's the one thing that AI platforms give you.
John Pettit:
Also, working with a partner: a lot of times they have access to funding, especially now, to make this cost practically nothing for you to get a couple of experts in there to help you lead this transformation and work with you on it.
If you're serious about evaluating it and bringing this into your organization, definitely reach out to Promevo or another partner. Again, it doesn't have to be expensive; a partner can come in and get you the expertise you need.
Brandon Carter:
And as a reminder, we are going in depth on this topic in September.
Hate to make you wait two months, but Colin will talk through that process and the best practices, whether you're working with a reseller or not. We're going to go deep into that one in our September webinar. So again, everyone that RSVP'd here will be registered for that one, and we'd love to see you.
Kind of going in a slightly different direction. I'm curious about who in an organization is AI right for?
As the marketing guy, it makes sense for me because I'm generating content, emails, blogs, things like that all the time. But in your opinion, are there people that maybe shouldn't have it?
I'm curious, Jay: as Spotify got a hold of its tools, was there a decision-making process there? Was it "everyone gets access to it, we're going to show you how it works, and it's up to you to use it"? Or was it "no, there are certain people we're not going to roll it out to"?
Jay Feliciano:
Great question.
I think we took the question one step further: is one AI tool right for all employees?
Because there's a clear division in Spotify right now, and I think you're going to find this among any engineering orgs: generative AI, and the "help me code and clean all this up for me" AI.
Those two worlds are clearly divided down the middle. Treating it like that, and asking what problems you're trying to solve for, will help answer that question for your organization.
At some fundamental level, I think everyone should have access to AI tooling. It may just be within an application like a Coda or an Asana. Maybe some folks need to take it a step further: they need access to the ChatGPTs, the Geminis of the world, the larger, bigger, "I need to do bigger-brain things" tools. Or you can have other people who just need an AI tool to help them do simple searching.
AI is powerful. It's here. And I think we should be worried not about what it can do for our employees, but about what it can do to our employees.
Brandon Carter:
That's a great point. Yeah. And obviously I changed the question down there at the bottom. To your point, AI is right for all employees, but it may not necessarily be the same tools across an organization.
Colin, in your opinion, are there teams that it really seems to connect with more than others? In the sense of AI in general: they get it, they're ready for it.
Colin McCarthy:
Yeah, certainly in the workshops and feedback that we've got so far, it's the client-facing folks who have really benefited from it.
Specifically, using Gemini in Gmail to help them craft much more professional emails out to prospects or returning existing clients. Content creators have used it an awful lot to help them as well.
We have worked with a lot of IT administrators who have used it for the creation of policies. So for those people it can be very handy.
Brandon, you mentioned support people. As Jay said, a lot of these tools, a lot of the ticketing systems, do have AI built in. So the support people who are solely working in their ticketing platform might not need an AI license assigned to their Workspace or Office 365 identity, but they certainly need access to it inside those platforms.
I don't think any of these tools does everything perfectly. The creatives might need a special tool. The coders obviously need a dedicated tool; I know there's Gemini for coders in Google Cloud, and certainly all of your GCP developers should have that. And I know there are special tools for business analytics.
There are shortcomings in every platform. Gemini in Google Sheets isn't great; the analysis we can do with it isn't great at this moment. So is it right for a finance person to have Gemini in Google Sheets, or would they be better off with a more analytically trained AI model for their needs?
Brandon Carter:
Let's get weird with it for a second. Because we know Google's got Gemini, there's OpenAI, there are these prebuilt AI language models, but you can build your own.
With all the things we've talked about building your own, is that realistic? Is that an option? Is that something that an organization needs to legitimately look at in you guys' opinion?
John Pettit:
On the line of build versus buy: there's enough complexity, and a lot of people are already coming in across the ecosystem with new AI.
You should definitely look at buy first. You shouldn't jump out of the gate and say "we have to build this," because there's a lot of AI out there and you can integrate with pretty much all of it. There are some builds that may add value to your business, but you have to look at the ROI on projects. What are you getting out of it commercially in terms of actual time savings?
I've seen a lot of projects start and get stuck in experimentation on the build side. It's not as straightforward a path yet to simply say, "I'm going to build my own AI system." There are lots of ways to do it, and there's a lot of documentation on it, but it's not a simple task, and it's not simple to maintain.
Jay Feliciano:
And cost, right? What's the cost to host AI now? We see that these companies are adding it, and the costs year over year are going up with a lot of the SaaS tools that are out there. So do we just pay that fee? Do we take on that undertaking ourselves? It's a very company-dependent line item, right?
Where are you in your journey? What can you afford to spend? That's why, again, I'm going to go back to why I love Gemini and really enjoy Google's approach to AI right now, for the things they're doing positively: they're addressing the needs of different work types.
And Colin came up with a good one earlier: if you have a finance person who needs deeper analytical tools, yes, we can go outside and buy this stuff. We could also build something. But with Gemini there's that in-between, where you have the data already, you have the tools, you have AI functionality. Now, what can you start mixing into a glass to come out with what you're actually looking for?
And John mentioned something yesterday in a convo we were having around Vertex and using the Gemini functions there, building that out and training it on your own data. So now you have data that you trained, that you have control over and no one else does, and you can deploy a cool AI feature for your company that's tailored to you, that you didn't have to buy. Again, work with your resellers, folks.
Colin McCarthy:
Yeah, I guess it depends on the question: are you building your own LLM, a large language model, or are you building your own integration using established models like Gemini to enhance your own internal tools, as opposed to buying something off the shelf?
Yeah, that's a difficult strategic question for your Chief Product Officer, your Chief Strategy Officer, your CTO, your CISO, your Chief AI Officer who's only just started. I think I might get AI to write me a job description, and maybe Jay will get to do it, he's a quicker typist than I am.
What is the actual job description and job posting for a Chief AI Officer? What do they need? Yeah.
John Pettit:
Even without your own LLM, you can take your data and make Gemini bespoke to it, right? You can take a generalized model, Claude, whatever, Llama, and then tune it using your training inputs without having to do a full retraining and spend 10 million dollars.
So you have paths. I think that's a great point, Jay. You have paths to get better accuracy and things that benefit your business that you can integrate with, but you still have a strategic decision.
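As a concrete illustration of the approach John describes, tuning a general model on your own examples usually starts with a supervised tuning dataset: a JSONL file of prompt/response pairs. The sketch below assembles such a file in the chat-style record shape used by Vertex AI's Gemini supervised tuning; the exact field names should be verified against the current Google Cloud documentation, and the example pairs are invented for illustration.

```python
import json

def to_tuning_jsonl(pairs):
    """Turn (prompt, response) pairs into JSONL lines for supervised tuning.

    Each line is one training example: the user's prompt followed by the
    model response we want the tuned model to imitate. The record schema
    here is an assumption; confirm it against the Vertex AI tuning docs.
    """
    lines = []
    for prompt, response in pairs:
        record = {
            "contents": [
                {"role": "user", "parts": [{"text": prompt}]},
                {"role": "model", "parts": [{"text": response}]},
            ]
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)

# Invented examples from a hypothetical internal policy FAQ.
examples = [
    ("What is our travel reimbursement window?",
     "Expenses must be filed within 30 days of travel."),
    ("Who approves contractor onboarding?",
     "The hiring manager plus IT security."),
]

jsonl_text = to_tuning_jsonl(examples)
print(jsonl_text)
```

In practice the resulting file would be uploaded to Cloud Storage and referenced when launching a tuning job, which adapts the base model's behavior without the full-retraining cost John mentions; check the job-submission API in the Vertex AI documentation before relying on this shape.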
Brandon Carter:
Again, I don't want to make this one big long sales pitch for other webinars or Promevo, but this is the topic we're really going to dive into next month.
We're going to get into really imbuing AI into your product, your service, your company, using some of the other models, like Vertex that Jay mentioned, that are really meant for your customization. We've got one of the main AI gurus at Google joining us, and we're going to bring on our own developers to talk through it. So, just another pitch for that one.
We have a few more minutes here. I want to take us back into the employee world, and then we'll wrap it up. We've hinted at a lot of this, but I'm curious about the ethical side of AI.
And I'm thinking of instances, maybe far-reaching instances, where employees may be relying on AI in their emails to each other; maybe you lose some of your voice. There are also malicious things: employees pretending to be other employees, employees who might generate a potentially offensive image, things like that.
Going back to regulation: how do you prevent this stuff? Is it possible? What do people need to be aware of when you're handing out a tool with infinite creative capabilities? How do you make sure the usage stays on the up and up? Or can you?
John Pettit:
I'm going to say one thing on it real quick, and then I'll let these guys talk about it.
The policy stuff is important, right? Just because you have a tool, you're not allowed to be abusive. You're not allowed to do offensive things. You're not allowed to pass work off as your own. Those are hopefully already part of the HR policies in your organization, and this is rooted in them. And you should already have a pathway to report abuse and have it handled correctly. Using a tool does not free you from responsibility or accountability.
It isn't "oh, this is what AI gave me." No, it's you. You did it. You're responsible for it. You can't just be hands-off.
So I think you just gotta be really clear to people. Like we're giving you a powerful tool. This is like the internet. Everybody has access to information. Now everybody has access to productivity in a real way. You have to be responsible in how you use it and you're going to be accountable with what you do with it.
Colin McCarthy:
In the workshops we do for clients and the reports that we produce, I do note that the report was created with the help of AI. We are up front with people.
When you posed that question, Brandon, my mind instantly went to security and end-user training. There have been instances of AI being used for improved phishing. At another company I previously worked with, somebody tried to spoof the CEO with a fake Teams call to one of the subsidiaries, to try to get them to make a withdrawal for a huge amount of cash.
So on the ethical side, that's what I would ramp up: security training, phishing training, awareness training, your security and financial policies, making sure people have multiple signatures, multiple approvals.
And proof-of-life approvals, not just "oh, John said yes." Did John pass over the color of the day that's written up on the office notice board, or something like that? It's going to get to that government nuclear-key type of security that financial institutions need to have, because AI, when it's used incorrectly, will make the bad actors much more efficient. Just like AI will make your great team members even better, more productive, higher-quality team members.
Jay Feliciano:
Yeah, I have two takeaways from this. One: partner with a company that has ethics, right? You're well within your rights to ask: what are your ethics behind this? Show me what you allow and what you don't allow.
And the other component of this that I see happening, probably within the next two to maybe four years: we're going to see a lot of government-body control over AI.
We've seen it already. I believe just recently there was a vote, I think unanimous, to control AI image manipulation and what it can do. I think we're going to start seeing more things like that. When it starts harming people is when we'll see more control, more things put in place.
Right now, as someone said earlier, I don't want to equate it to the Wild West, because there was some crazy nonsense going on there, but we are on the digital frontier of that right now.
John Pettit:
You just sparked a thought there. There's a bank I use.
When I call in, they say, "just say 'my voice is my password,'" right? And it lets you in. And I know now, with AI, from 10 seconds of my voice, from somebody just calling or talking to me, they can reproduce that. There are a lot of controls that need to be put in place across the industry.
You just triggered that with the financial controls.
Colin McCarthy:
Yeah. And there's so much content of us out there; people put their own stuff on Instagram. So even if you're not a YouTube star or a podcaster whose voice could be harvested, if you've put anything publicly accessible on social media, they could train on your voice and then use that to bypass another system you have in place.
So yeah, some worries there to keep us up at night.
Brandon Carter:
We're coming up on time. If I can summarize here: it's good to have a partner, someone to help you through some of these dilemmas and questions that are going to come up, and to make sure that your organization is in the right position for it, whether you're a pretty big, well-known company like Spotify or a startup with five people.
Before we go, with just a couple minutes left, does anyone want to proffer an opinion on what the future looks like here? We know it's a little Wild West-y right now. We're not going to go full Wild West, but there are a lot of things up in the air.
If you could time machine forward a year or two... it could be three months from now, as quickly as things are changing. What does this look like? Is it still just like you have your employee enablement productivity tools, or is there like a whole other version of this that may be coming down the pipeline?
John Pettit:
I believe you're not going to buy productivity tools. You're going to end up buying an AI tool or a set of AI tools for your organization. I think productivity tools were a stopgap in terms of making people more productive, going from paper to digital. Now AI is going to be the thing you consider the key technology empowering your business.
And you're going to have to be really thoughtful about the ones you bring in. If you believe the people, the engineers working on OpenAI and elsewhere, they see AGI in two to three years. They see nothing really stopping us from getting to a place where AIs are talking to AIs, and you have specialized AI agents.
And we can't forecast what we've never seen: a world beyond AGI, these jobs, what we do, how we interface with it. But we do need to start figuring out how to put these things in place for our organizations responsibly. I think the ones who don't just may not exist. It's an important time to do it.
Colin McCarthy:
Yeah. To echo what John said, I think this will change collaboration platforms. I think this will be as big a change as Slack always said their platform was going to be, changing how we view communication and getting rid of email. We still use email, but do I need to monitor my inbox when Gemini can summarize it and tell me what my action items are?
So I think a lot of business processes can be enhanced and automated. But I don't want an AI Colin talking to an AI Jay; there's no value in that. I think we'll still do a lot better work talking directly, collaborating, and having those discussions.
Yeah, it'll be interesting to see what does evolve.
Brandon Carter:
Jay, any final thoughts?
Jay Feliciano:
Yeah, sure. I see a world where we're going to be creating new careers, new ways to manage our work, right? I want to live in a future where my workspace is AI-driven and I'm at the head of it, in control of it.
I'm having it do the things that would take me time. Add up the 5-, 10-, 20-minute tasks you have to do throughout a day; I can reclaim all that time to be more creative, more forward-thinking, and not live in that firefighting mode a lot of us tend to be in.
I see some big things coming. To me, the positives outweigh any potential negatives that I can see. I don't think people need to be afraid of AI. I think they just need to be afraid of being left behind.
Brandon Carter:
And that is a perfect point to wrap it up on.
Jay, John, Colin: awesome, great stuff in there, really crunchy stuff to think on and chew on. Anyone out there who wants to get on this journey, reach out to us at Promevo.com. That's what we do, and we're happy to help.
In the meantime, we'll see you at the next one in a month and thanks again to our presenters and thanks to all of you and have a great day.