how to start with AI. Many people are now in the process of starting something with AI, but it's like building a house: they build stuff on top and don't look at the foundations. The foundations are what keep the house standing, and for a firm, the foundations are what let you really make money out of it. And you might have realized you use AI in your firm, you do this, you do that, but somehow
in the big picture, things don't move as you expected. How can you use AI so that things really move? Today I have Colin Graves, CEO and founder of North Labs, who can give us tips on how to really implement AI. Welcome to another episode of the Beginner's Guide to AI,
another interview episode, and on the microphone it's Dietmar from Argo Berlin. Now let's jump right into the interview. I could say a lot about Colin, but the best thing is he tells you about himself, and the most important thing: how did you get into AI, Colin?
Yeah, it's a long story, but I'll keep it brief. I've been in the cloud data analytics world for the past 17 years; this is the second company I've run in this space. But I don't come from a traditional background, I guess. I was actually in the military with the US Air Force and NATO, living in the US, living in Germany, flying all over the world, when I first learned about this cool, brand new thing called Amazon Web Services back in 2007.
For whatever reason, I was not working on computers in the military. I was turning wrenches on big airplanes and securing perimeters for crazy stuff in Africa and things like that. And for whatever reason, my father sent me an article, I think from the Wall Street Journal, about this new concept called the cloud that Amazon was doing,
And for whatever reason, I decided right there: I should learn about this, I think this is going to be a big deal. So I spent the remainder of my time in the military learning all I could. When we were flying on aircraft, I really had no responsibilities; my responsibilities were when the airplane was on the ground, for maintenance or security or things like that. So I would string up a hammock in the back of a C-130, and I would learn about the cloud, learn programming fundamentals, what it means to write Python code and things like that. It led me to become one of the first 10 fully certified AWS folks in the world.
The first company I started was very early on in AWS when it was not really that popular to talk about the cloud. And AI has become a natural extension of that. I think it's the next frontier of this pursuit of data maturity or this pursuit of operational excellence.
that we're applying data toward within businesses. And so it's been a lot of fun. I haven't really seen a wave of popularity like this since those early days of the cloud. And it's quite remarkable to be able to participate in it.
That is actually a great point, that the transition from cloud to everything with AI is natural. And it's a good transition too. I was just on the case study part of the North Labs website, and there's a lot of AWS there as well. And what one sees in the cases is that the essential part is data. That's what you do, basically, with North Labs, huh?
It is, right. I mean, whenever we talk about AI, and I love that it's so interesting to so many people, and we talk to so many organizations who are looking to leverage AI to build better businesses, to give their employees time back and to make better decisions and to automate away some of that stuff that
employees are having to do today, which is just a huge waste of time. But the big philosophy I stand on, that I try and advocate for with our customers, is this: if you think about building a house,
AI is this new second story that you're going to put on your house and you're going to turn your house into this wonderful forever home and you're going to have, you know, all the rooms you can ever want and you're going to have a really fun man cave upstairs or whatever the case may be. But if that's true, then the notion of
pouring the foundation and laying the rebar to make that home as sturdy as possible becomes even more important. And unfortunately, I guess fortunately for my team, but unfortunately for the industry, a lot of organizations tend to skip over that and jump straight into AI and end up in a position where their AI capabilities aren't as trustworthy as they could be.
because they're not built off of that unified collective source of truth within the business. And so, yeah, it's a natural extension for sure. But I think one of the main takeaways I try and leave people with is
you're really only as strong as your foundation. You would much rather have a smaller home on a really solid foundation than a huge home that's built on dirt. Unfortunately, we see that a bit in the market, and we're doing what we can to advocate for the foundational strength that will ultimately lead to better results with AI.
So people come to you and say, okay, let's do AI. We heard about that, we saw it on TV, we want to do that. And then you go in and say, oh my god.
Well, yeah, and good on them for wanting to do it. It means that people are speaking with their peers. It means they're involved in these trends that are going on, and they're seeing what AI could do for their business. And so they're doing all the right things. The question becomes, how do we do this in a way that won't take us years, cost us millions of dollars, or worst case scenario,
do both of those, long time, lots of money, and then put us in a position where we need to start over. And I see that a lot. And so, you know, really our sort of
competitive advantage in the market or how we're positioned is, you know, North Labs is a team of software and data nerds. And we've figured out what I believe to be the fastest way to build that foundation for organizations. We can build it mostly automatically.
We have prebuilt data models that we can plug right into their data, do a little bit of tweaking, and they have that really solid foundation in a matter of days or weeks instead of months, quarters, years. And then they can start on these AI efforts. So we've developed a way to make that first chapter of the book as painless as possible.
But it doesn't mean it's not important. It just means that we're trying to commoditize that as much as possible so that we can then shift our focus on or shift our focus toward what could be most impactful for a business. But yeah, we see that sort of stuff all the time.
How does that happen then? Do you go in there and just download everything they have, or what data gets ingested by the AI? How can I imagine that?
Yeah, so we work within a couple of core industries. We could technically work with any industry, but our strengths are really twofold. The first is manufacturing and industrial; that is the majority of our customer base.
I come from a family of manufacturing business owners. I'm originally from Minnesota, right near Canada in the central part of the US, which has a huge manufacturing industry, not only steel and aluminum and things like that, but agriculture and medical devices. We have a lot of the big players in the medical device space in Minnesota.
And so what we do is we pre-build solutions for these organizations. So we have pre-built what we call domains that we can plug into this organization's data. So we suck their data in from the systems that matter.
whether that is an ERP system, a CRM system, manufacturing execution system, then we drop in these domains around production insights, around predictive maintenance, around scrap and throughput, sales and demand forecasting, whatever that might be.
So we don't need to build this stuff from scratch every single time. Our pre-built models get plopped in and we're already 80 or 90% of the way done. Then we tweak what we need to so the models work best for the customer.
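The prebuilt-domain idea can be sketched in a few lines. This is purely illustrative, not North Labs' actual code: a reusable "scrap and throughput" transform that any customer's ERP-style production rows could be plugged into, with the field names here invented for the example.

```python
# Toy sketch of a prebuilt "scrap and throughput" domain: a reusable
# transformation applied to generic ERP-style production rows.
# Field names ("line", "good_units", "scrap_units") are illustrative.

def scrap_rate_domain(production_rows):
    """Compute the scrap rate per production line."""
    totals = {}
    for row in production_rows:
        line = row["line"]
        good, scrap = totals.get(line, (0, 0))
        totals[line] = (good + row["good_units"], scrap + row["scrap_units"])
    return {
        line: scrap / (good + scrap)
        for line, (good, scrap) in totals.items()
    }

rows = [
    {"line": "A", "good_units": 950, "scrap_units": 50},
    {"line": "A", "good_units": 900, "scrap_units": 100},
    {"line": "B", "good_units": 980, "scrap_units": 20},
]
print(scrap_rate_domain(rows))  # line A: 7.5% scrap, line B: 2% scrap
```

Because the transform only assumes a generic row shape, the same "domain" can be reused across customers, and only the mapping from each customer's ERP to those fields needs tweaking.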
And then our second industry is higher education, which I never thought that I would be a part of. It's historically very slow. It's historically very inefficient. But we've developed machine learning models.
to assist these higher ed institutions with the areas they struggle most. Notably, how do I attract the best possible student? How do I get them to enroll? And then how do I get them to stick around until graduation? So we have different machine learning models that we can plug in to help identify what students are going to leave no matter what.
and which students are at risk of leaving, where it might be worth spending some time, attention, and care
to get them to stick around. And then we handle a lot of the government compliance aspect of what they do as well, because that historically takes a ton of people doing very manual work to report enrollment numbers and that sort of thing to the government. And we've automated that. And then from there, you know, if a customer requests something that we don't have built,
we'll build it at no additional cost, because the way we develop these domains, we only have to build it one time, and then we can reuse it across our customer portfolio many, many times. So we don't charge anything extra to build out new domains, because we know that first go is R&D, and from there on out we have this domain to reuse in the future. But yeah, it's sucking in the relevant systems, and we help them identify those.
It's hooking up these models. It's doing any data quality work that might need to go on in order to make the domains work as well as they can. And then really it's customizing and adding on additional capabilities from there.
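Purely as a toy illustration of the retention idea (the signals, weights, and threshold below are all invented for the example, not North Labs' actual models, which would be learned from data), scoring attrition risk and triaging who is worth intervening with might look like:

```python
# Toy attrition-risk score for students. Signals and weights are
# invented for illustration; a real model would be trained on data.

def attrition_risk(student):
    score = 0.0
    if student["gpa"] < 2.0:
        score += 0.4              # academic struggle
    if student["credits_attempted"] - student["credits_earned"] > 6:
        score += 0.3              # falling behind on credits
    if not student["met_advisor"]:
        score += 0.2              # low engagement
    if student["financial_hold"]:
        score += 0.1              # financial stress
    return score

def worth_intervening(students, threshold=0.5):
    """IDs of at-risk students where time, attention, and care may help."""
    return [s["id"] for s in students if attrition_risk(s) >= threshold]

students = [
    {"id": 1, "gpa": 3.5, "credits_attempted": 30, "credits_earned": 30,
     "met_advisor": True, "financial_hold": False},   # low risk
    {"id": 2, "gpa": 1.8, "credits_attempted": 30, "credits_earned": 18,
     "met_advisor": False, "financial_hold": False},  # high risk
]
print(worth_intervening(students))  # [2]
```

The point of the triage split is exactly what's described above: separate the students who will leave no matter what from the ones an advisor's attention could actually keep.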
That's quite interesting. The part with higher education is totally counterintuitive, but yeah, it makes sense. If you have some signals, I mean, it's like production: higher education is, in a way, also production, you produce the people coming out. And the other thing might be cars or something else, and you can just see there's a pattern where things don't work. That's really interesting. The other thing is the whole bureaucracy part, because it's always the same, but it's really, oh my god, going through hundreds and maybe thousands of data sets to get something out.
Yeah, I mean, we look for examples in the market where you have multivariate systems. And this is going to sound crazy, but higher education is very similar to manufacturing in that regard. In manufacturing, you have tons of raw materials that you need to bring in. Then, if you think of a bow tie, right, big on both sides and narrow in the middle: lots of inventory and raw materials coming in all gets concentrated down into a production line. And the other side of that bow tie is the distribution network, going out to customers.
For higher education, it's pretty similar. You have a wide outreach in terms of attraction and enrollment. Then the bow tie gets narrower when the student is actually in place, taking a course load, adding a major, whatever the case may be. And then afterwards it widens out again into things like: hey, wouldn't it be nice, Dietmar, if you could add a second degree or a more professional degree on top of what you've already done? Or: hey, you've successfully graduated. Well, guess what? Now we want to turn you into an alumni donor. So we have campaigns and efforts that go out
So we can secure donations from you in the future to keep revenue coming in to the institution. So they're shaped pretty similar. And I think that's why we found success in those domains. There are other examples like retail that we plan to get into and CPG and things like that. But those two examples really, if you overlay them on top of one another, they're fairly similar.
So it makes sense: everything that has this bow-tie form fits the model. What is the biggest problem with the data you have there? Is it the quality of the data, the access, or how is this?
We don't struggle with access that much anymore because we bring everything to the table for a customer. So when they sign on with us, we bring all of the people, all of the process, all of the technology, they would need to run these capabilities. We didn't used to do that.
We used to say, all right, you signed on with us, that's great, and like a more traditional services partner, you're going to grant us access to your systems, and we're going to build within your systems. As you might imagine, in manufacturing, especially the closer you get to, say, the defense sector, that becomes very difficult. And then higher education, because of the compliance and governance requirements there, that's also difficult.
You have different federal compliance that says, don't allow third party people to just come in and poke around in your data. That would be silly. So now that we bring all of the tools to the table, the access piece isn't as difficult. What I'll say is the biggest thing that we run into with customers is
helping, I guess, coming in and challenging the status quo a bit, because a lot of these organizations, say you're working with a $500 million a year manufacturer, they're probably already trying to do something with their data. They might not be doing it very successfully, but they're trying to do something. So they might have bought a few tools, they might have a few folks on the team who are working on this stuff, and then these North Labs yahoos come in and say, hey,
Forget what you've been doing, we're going to do it our way. And so you run into some friction there until you take the time to train people on why it's better. And then the second piece I find is that a lot of organizations still tend to want to, as the phrase goes, boil the ocean.
So what that means is somebody might come and say, hey, we want to be data driven within Fisher Industries, right? Go out and ingest all of our data across all of our systems and then we'll figure out what to do with it, right? But the problem with that is in the last eight years, at least for the manufacturing industry, the average number of tools, software tools within these organizations has doubled.
So you have crazy tool sprawl. And you can't go into a transformation effort like this acting as though all of these sources are created equal. A lot of times when we see sprawl like that, it means there are a lot of tools, but only a small amount of each tool is being used. We call that tool overlap.
So where you might have been successful with one tool, now you have three, each of which are doing a third of the job. And so ingesting all that stuff takes a lot of time. So we work to really help the customer identify, okay, what are the 10% of your systems or the 5% of your systems that are going to be sort of the pack mules in this that are gonna carry the most weight?
and we ingest those immediately. And then we can ingest other one-off tools in the future. And from a data quality standpoint, it's a great question I get asked a lot. There are always going to be data quality issues. And I hear a lot of customers say, well, maybe we'll just take the next year and fix our data, and then we'll work with you.
Well, what does that even mean? Because data quality is a never ending battle. Anytime you have people involved, there are going to be data quality issues. But what I would argue is getting started building your data maturity actually serves as the litmus test and helps you focus in on where your data quality issues are the worst. Because it means that we're using your most important systems on your most important business problems.
trying to create the most important business insights. So now we might only be talking about three or five tools. So if we see the data being produced isn't quite accurate, now we have the ability to trace it back to where it's going wrong and then focus our efforts on fixing that piece. As opposed to saying, we're gonna fix our data, which obviously won't happen. So hopefully that helps.
I see the point, yeah. And that also answers a question I already had in my head. It's like this boiling-the-ocean thing: you don't need to do everything at once. You focus on the things where you are strong, where your data is strong, where your focus is strong, where you can probably make a strong use case with the data.
Do you have some things you always see, like the low-hanging fruits? What are the easiest takeaways for people?

Yeah, so most of the time, in manufacturing at least, and I'm going to use the term ERP, which is enterprise resource planning. Think of that as a giant software product: your SAPs, your Oracles, your NetSuites, your Sage-type systems.
But this could apply to, you know, if a listener is like, well, we don't have an ERP: well, think of your CRM system. That's pretty central to a lot of organizations, right? Most of the time, your most important systems have much higher data quality
than secondary or tertiary systems, because a lot of people might not even know those tools exist. It might only exist within one function of the business or with a small handful of people within the business. But the ERP or the CRM is being used and touched by a lot of people. It's being looked at by the executives in the business. And so the likelihood that that data quality is at least moderately acceptable is pretty high.
Right. And most of the time, that's where we want to start, because that one system is the brain of the business. So yeah, that's typically where we start. Now, from an AI standpoint, and maybe we're going to get into this, what I see a lot of organizations doing that I don't think they should do
is adopting the AI capabilities of any given software product that they have. Every tool on the market right now is claiming to be AI this, AI that. We've got an assistant, we've got a co-pilot, we've got a this and a that. The issue with that is
You're going to have different large language models working within different contexts.
and thus having a different understanding of your business, with no way to unify that. And theoretically, this is just a math thing, right? The context of any one of these given tools is going to be slightly different at any given time. And I believe that is dangerous. It reminds me of when the cloud first kicked off.
I've been in this industry a long time. From 2010 through 2014, organizations had a blanket mandate: just get it to the cloud. I don't care what it looks like, I don't care where it lives, put it in the cloud. And they realized at the tail end of that time period that that was a bad idea, because some
platforms, services, and tools were not ready to go to the cloud. And so they spent enormous sums of money and enormous amounts of resources to move these things, and it ultimately failed. And that's where I think the risk lies in these early days of GenAI: we're just going to use GenAI wherever it's being offered.
And we're going to make business-dependent systems on these various tools, and we're not going to spend the time pouring that foundation of the house. And that point in the future will come, when, I don't know, but I know it will come, where organizations say: ah, okay, this tool handles this, this tool handles that, but neither of them has a perfect representation of what the business is and what we do.
Now we're going to need to unwind things a bit to make sure that we have that unified knowledge base and that unified context. That will come in the future. I just don't know when.
Yeah, it's the same as with the internet. People went on the internet, then the cloud, then mobile, whatever, and had to learn how to use it. It takes some time, and many, many mistakes have been made already, I guess. You actually took my question away; I just wanted to ask about the biggest problem there, but you were already going for it.
There's another question I have. You have a system that takes the data and unifies it. How do I have to imagine that? Do people just jump on board? Do they get training? Are there problems with people adapting to it? Are there people that are afraid, maybe afraid for their jobs, you know?
Yeah, no, it's a good question. The change management piece, I think, is a big one. On more of the foundational data side, right? So just unifying that data for the first time, like we were talking about, my group handles all of that on behalf of our customers. So most immature organizations will come to the table and say,
oh yeah, you know, Dietmar will handle it, he's going to handle our data for our business. And you go, okay, well, what's his day job? Well, he's a director of this, that and the other. Okay, so he has a full-time job, and he's probably working longer than an average work week in any given week, and now you're going to tell him he's responsible for the data capabilities within the business? So that doesn't work.
That's a terrible idea because most executives just think of data capabilities as I have dashboards, I have reports, that sort of thing. But if you think of a data ecosystem like an iceberg, what do we know about icebergs? Most of the mass is under the water line and you can't see it. And that's where data security, data governance, performance, cost,
data operations, pipelining, ETL jobs: all of this stuff happens sort of behind the scenes, right? And most organizations are historically unaware of that, unless they're very large organizations who have been running data programs for a long time; then they might have it figured out. And frankly, very large organizations can afford to have a team of 25, 50, 100 folks manning the ship.
So we handle all the gross stuff that doesn't add ROI for our customers, so they can use the piece that does add ROI to their business. With AI, it's fairly similar, right? Unless you're just turning on Microsoft Copilot and pretending that's a lasting solution, which I would argue it is not.
Right? You're going to need someone who owns that large language model for the business, who governs it, who secures it, who makes sure it's training itself properly so we don't run into hallucinations and things like that. And then you're going to need to run the change management process for end users. But to answer your question about sort of the trepidation or the anxiety around AI.
I can't remember who said this, so maybe we'll include it in the footnotes; I'll find it and send it to you after. But the quote that I love is: AI is not going to replace humans. Humans who know how to use AI are going to replace humans who don't. And I believe that is unequivocally true. Now, AI absolutely can and should automate away
what I call undifferentiated heavy lifting: the stuff you have to do in your job that you hate doing and that takes a lot of time, filling out a form manually or, you know what I mean, pulling certain data fields out of a form submission, whatever. Let's let AI do that, so the people involved
can be pointed at business value and can feel more fulfilled in the work they're doing. Like AI should take care of that, I believe. But I don't think we're in a position anywhere in the near future, and maybe others disagree with me, where AI is just going to take people's jobs.
because we're still going to need human supervision on these systems. There's going to have to be a human in the loop somewhere.
for every job that runs. And so our job is to make people feel less anxiety when using AI and view it as the Robin to their Batman: it can allow you to do your job and fulfill your mission better. And I think employees, certainly if you look at the statistics, are coming around to the idea that AI can be a complementary
thing for them, as opposed to sort of an aggressor.

That's definitely good news for the people out there. Another thing in these answers I found really interesting: you said, like, the director who just uses a dashboard or something, that sounds like pre-AI usage,
like using a new tool in the old ways. There were one or two other things, like the iceberg example: it's as if you just see what you would expect from, say, a 2020 perspective. I find it really interesting that using new tools for old jobs is not really the usage we need.

Yeah, I think,
You know, at the end of the day, change takes a long time. And when you think about the average age of a business owner or a business executive across the world, it's not just in the United States, it's not just in Germany, right? These are folks who have been in the industry a long time and certainly in manufacturing and industrial, these folks have built extremely successful businesses with their spidey senses.
because they've historically been the last group to adopt cutting-edge technology. Now, the popularity and the societal norm shifting is making them go, maybe that is worth us thinking about. But really, for me, success in my business and my industry
is bridging that gap. I'm not in the business of going and saying, hey, Dietmar, I know you've run this successful business for the last 40 years, and now we're going to completely change how you do things, just trust us. Because guess what? I wouldn't have a business, because the door would slam in my face over and over. So when you think about the old way, let's think about reporting and dashboards, that sort of thing, which a lot of your listeners will go: that's so 2000s. And I agree.
Right? But how can generative AI help enhance the existing behaviors of that user base? Two ways that I see most frequently that are amazing. One, we can apply predictive models to those dashboards and reports out of the box.
So we're no longer in a position where my team is having to train a huge machine learning model for forecasting anymore.
A lot of it is coming right out of the box, whether you use AWS, Azure, or Google Cloud. That's amazing, because a lot of effort historically has gone into forecasting for these businesses, and the accuracy of their forecasts has been very bad for a long time. So now we have reason to believe that overnight,
Our forecasting will be better. The second piece and the one that I think is the single biggest driver of executive adoption for gen AI in the market today is a term called natural language querying. And for your listeners who aren't familiar, think of this as the ability to almost make a Google search of your organization's data in plain English.
but then the system can respond to you either in plain English or by automatically creating a dashboard or a report, something that historically you might have had to wait weeks or months for, depending on how busy your BI team was. Now you're getting an answer immediately. It's building a dashboard for you, you can save that dashboard for future use, and it will update as your data updates.
That, to me, is bar none. Like, yes, conversational AI, ChatGPT, all that stuff, fine. But if we're talking about: hey, this question I have isn't currently answered by our data capabilities, and now I have a means to answer it in less than a minute, as opposed to three weeks or three months of "yeah, boss, we'll get to it," right?
That is a game changer that will do nothing but promote adoption of these tools and will promote business velocity, which is what everyone is looking for.
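The natural-language-querying loop described here can be sketched as a pattern: hand the model the schema plus the question, get SQL back, run it, return the rows. The `ask_model` function below is a stub standing in for a call to a privately hosted LLM; everything else is Python standard library, and the table is invented for the demo.

```python
import sqlite3

# Sketch of the natural-language-querying pattern: schema + question in,
# SQL out, rows back. `ask_model` is a stub; in practice it would call
# your privately hosted LLM with the schema and question as context.

SCHEMA = "CREATE TABLE orders (region TEXT, revenue REAL)"

def ask_model(schema, question):
    # Stand-in for an LLM call; returns canned SQL for this demo.
    return "SELECT region, SUM(revenue) FROM orders GROUP BY region ORDER BY region"

def answer(question, rows):
    db = sqlite3.connect(":memory:")
    db.execute(SCHEMA)
    db.executemany("INSERT INTO orders VALUES (?, ?)", rows)
    sql = ask_model(SCHEMA, question)
    return db.execute(sql).fetchall()

print(answer("Revenue by region?", [("EU", 100.0), ("US", 250.0), ("EU", 50.0)]))
# [('EU', 150.0), ('US', 250.0)]
```

A production version would also validate the generated SQL (read-only, against known tables) before executing it, which is part of why owning and governing the model matters.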
No, it is like, I mean, if I use a tool to write an email or make a social media post, that's nice. But if I can make a dashboard with predictions, as a basis for my decisions, strategic decisions, that's not the tactical level, it's more the strategic level. That's much higher quality. Yeah.
Yeah. And I mean, these manufacturing and industrial companies answer a lot of what we call RFPs in the US, requests for proposal. If the government needs something that you sell, they'll put out an RFP, a bunch of organizations will bid on it, and the government will select one, right? RFPs take an enormous amount of time
to put together. And they're so stupid, but we have to do them, because that's how the world works. But imagine a system, and we've built this for a customer, so this is a real-world example. Imagine a system where the next time Fisher Industries gets an RFP, say from the government of Berlin, and they say: we need 10 of this thing that you sell.
Okay. The system can look at all of your historical RFP responses, because they're always structured the same way, and it can fill out 90% of that RFP response based on historical data and the new inputs from the request. And so now you're spending five hours reading through things and making sure it flows appropriately, instead of 500 hours putting together an RFP.
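That retrieve-and-reuse pattern can be sketched very roughly. This is a toy stand-in for the real system (which would use an LLM over a private corpus); here, plain string similarity from Python's standard library picks the closest previously answered question and flags weak matches for a human, with the questions and answers invented for the example:

```python
import difflib

# Toy sketch of RFP autofill: for each question in a new RFP, reuse the
# answer from the most similar previously answered question, and flag
# low-similarity questions for human review. Content is invented.

HISTORY = {
    "Describe your quality management system.": "We are ISO 9001 certified...",
    "What is your standard lead time?": "Typical lead time is 6-8 weeks.",
}

def autofill(rfp_questions, min_similarity=0.6):
    draft = {}
    for q in rfp_questions:
        best = max(HISTORY, key=lambda h: difflib.SequenceMatcher(None, q, h).ratio())
        ratio = difflib.SequenceMatcher(None, q, best).ratio()
        draft[q] = HISTORY[best] if ratio >= min_similarity else "NEEDS HUMAN REVIEW"
    return draft
```

The real gain comes from the review threshold: the system drafts what it can and routes only the genuinely new questions to a person, which is the 500-hours-down-to-five effect described above.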
That is an enormous cost savings and an enormous business velocity driver we've never had access to, right? So these are the sorts of things. Now again, that can't happen successfully with ChatGPT or Microsoft Copilot; you're just not going to get the quality that you need. That's going to come from you standing up and owning your own Claude.
It could be OpenAI Enterprise, if you're training your own. It's going to come from your own company's large language model, where you're going to be able to drive that stuff.

I work at an agency, I have a small agency, and I get those RFPs. Funnily enough, they actually come from the government of Berlin. There was one, and I put a little bit of work in. It's so boring to do that, to research it, and then there's the probability of actually getting it.
I mean, there are 20 or 30 agencies applying, so the chances are really low. I actually had a professor who said: never do that, from a numbers-game perspective it just doesn't work. But now, making that automatic is easy for an agency. In manufacturing, though, you need so much more, from standardization to volume: can you even produce it?
So that makes total sense. And to have that, you need to own your own data. I think that's a really, really important point: you have to own an LLM based on your data. So how do I go there? I mean, you said OpenAI Enterprise or something. Do I build one? Do I take an open-source model? How do you do that? How do you handle that?
Yeah, there are a couple of ways you can go. For our customers, we provide the foundations for an LLM out of the box as part of our service with them. Not saying that your subscribers should buy from us, because we're not inexpensive. So we implement it, we manage it for the customer, that sort of thing. If you're going at it on your own, I do think a privately hosted version of OpenAI could work fine. The biggest thing to look out for, the thing that I just
can't get over with ChatGPT, is that every request you make is sent over the internet to be used to train the next ChatGPT version, everything, unless you have the Enterprise version and it's closed off. But then you're paying quite a bit more money for that, right? Because they're basically being subsidized by being able to use your data to train these models.
So OpenAI, hosted privately, is a good option. My favorite, though: we work with all the cloud providers out there, and our relationship with Amazon Web Services is the best of all of them. They have a service called Bedrock.
And Bedrock allows you to run multiple LLMs at the same time on a common data set. So you can be preparing your next LLM while your current one is running, which I think is something a lot of organizations aren't considering. I knew of a group that spent something like $10 million training on GPT-3.
Right? Nine months, twelve months of work, a lot of money. And about two weeks after they released it, when it was like, this is the best thing ever, GPT-4 came out. And GPT-4 made GPT-3 look like a caveman. But you can't just swap over and pretend everything is going to work the same. There's new context, there are new affinities. You have to go through that quality control process again. So a tool like Bedrock
will allow you to run version three, and when version four comes out, instead of going, oh my god, what have we done, you load four into Bedrock and evaluate it while production is still running on three. And when you're convinced of the quality, then you switch over.
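To make that concrete, here's a minimal sketch of running the same prompt against a production model and a candidate model on Amazon Bedrock using the Converse API via `boto3`. The model IDs, region, and prompt are illustrative assumptions, not anything Colin named; check which models are actually enabled in your own account.

```python
PROD_MODEL = "anthropic.claude-3-haiku-20240307-v1:0"  # current production model (illustrative)
CANDIDATE_MODEL = "amazon.nova-lite-v1:0"              # new model under evaluation (illustrative)

def ask(client, model_id, prompt):
    """Send one prompt to one Bedrock model via the Converse API and return the reply text."""
    response = client.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]

def compare(client, prompt):
    """Run the same prompt against both models so the answers can be QA'd side by side."""
    return {
        "production": ask(client, PROD_MODEL, prompt),
        "candidate": ask(client, CANDIDATE_MODEL, prompt),
    }

if __name__ == "__main__":
    import boto3  # requires AWS credentials and Bedrock model access in your account
    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
    print(compare(bedrock, "Summarize the benefits of running two models side by side."))
```

The point of the pattern is that both models sit behind the same interface and the same data, so promoting the candidate to production is a one-line change once its answers pass your quality checks.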
So that's great. And at AWS re:Invent, Amazon's big conference last week, they did announce their own LLMs. They've historically partnered with the folks at Anthropic, who make Claude, but they're rolling out their own family of LLMs called Nova,
which I'm very bullish on. Amazon is historically not first to market with a lot of this new stuff, but the fact of the matter is that something like 73% of the world's cloud data environments run on AWS. So they have the immediate user base and the immediate market saturation every time they roll something out. I do think we'll hear a lot more about that in the coming weeks and months.
Yeah, any one of those routes is a good way to start. But like we alluded to earlier, my biggest suggestion would be: don't just go to chat.openai.com, sign up, and start putting in company data. That's a really bad idea. Make sure it's a dedicated LLM so your data isn't being sent over the wire to OpenAI.
I think that's probably the most important thing you can tell people out there: don't just do that with all your data. It's the easiest route, but it's very dangerous. And we know how things go with the easy route. Maybe you, as a listener, know your position and know the risk, but it's not you; it's the people down the line somewhere who just go,
ah yeah, I want to write this, I want to have this, and they put it in without you knowing about it.
Yeah, yeah, that's how humans are. And this hasn't really happened yet; obviously you hear some stuff in the news. But with all of these gen AI platforms coming online, it's only a matter of time until we run into a gigantic ethics issue
with these groups, right? Because the terms of service today might say, no, we'll never train on your data, it's totally safe, you can trust us. Well, we all have enough people or actors in our lives where we go, okay, yeah, I don't believe you. We're still so early in this industry, and again, people want the easy button: I just want to use this thing. And I'm reiterating this because I see it in companies.
Like, yeah, we just signed up for public ChatGPT, and we put all sorts of things in there, machine data and customer data. Thank God I haven't heard of anyone saying, yeah, we put patient healthcare data in there, because that would be insane. But eventually, at some point in time, we're going to run into an ethics dilemma within the world of generative AI. We will. It has to happen.
As part of pushing the envelope of innovation, we will run into an ethics dilemma. The best way to safeguard yourself today is to make sure your data isn't publicly utilized within the domain of those organizations.
Right, even if you trust all the players, and so far they have a decent record, you don't know who takes them over or what happens. Maybe a big hack or whatever, and then the data comes out. Who knows? Exactly. Yeah. Colin, I have to come to a personal question: how do you personally use AI in your daily life, business or private? I'm curious about that. Not the firm, the private side.
I love it. Yeah. So my favorite thing is, well, my father is the smartest person I know. He's got three master's degrees in electrical engineering. He built pacemakers for Medtronic for many years and got patents on that. He helped build the Keurig coffee machine, which everybody knows. He wasn't part of Keurig itself; he was part of a contract manufacturer that built it. So
we don't have any patents on the Keurig. That would be nice. But I love to tinker with hardware. And, most related to your question, I love to tinker with the idea of combining edge computing and generative AI.
So how can we leverage AI at the edge, on devices, while syncing to the cloud but without needing that full round trip of call and response? Like many people, I used to trust my home security and home automation to a bunch of different vendors.
I'd have my security folks, my lighting folks, my access control, all of that. And maybe I only have this stuff because I'm a tech nerd. But eventually I got really sick of these systems not being able to talk to one another, because that's terrible. So I started tinkering with Raspberry Pi modules.
The new ones have ML chips in them, and you can run gen AI in the background. So I'm about 80% of the way through building my own smart home suite: lights, blinds, perimeter detection, surveillance, access control. You know, press one button and everything in the house shuts off.
It's been a super fun experiment. It's made me feel very uneducated at times; I'd never soldered anything in my life up to this point. And it's been great. But it's also made us think about what's next for North Labs with AI and physical hardware. So we talked earlier about these domains
that we have. Well, all of our domains today are software and data related, right? But what about asset tracking within a manufacturer, so I know this piece of steel moves through the line here and comes up this side? Or predictive maintenance, because software feeds from a piece of machinery can't tell you if it's getting too hot or if it's vibrating.
And that's typically a telltale sign that a machine is going to fail. So how might we use hardware sensors for vibration, pressure, and temperature, with machine learning at the edge so it can take action if needed in an emergency, while feeding back
to that centralized system where all of our other software data is going? Then we have a very complete, unified view of the business. So that's something that, you know,
my team doesn't have a directive from me to go and start building a bunch of hardware, but we are looking at what that next frontier is: bringing even more comprehensive solutions to our customers, much faster, at a fraction of the cost, and helping them increase the operational excellence that, really, we're in business together to achieve.
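The edge-side predictive maintenance Colin describes can be sketched in a few lines: watch recent sensor readings locally, flag a reading that drifts far outside the rolling baseline, act at the edge immediately, and queue an event to sync back to the central system. This is a minimal illustration with made-up thresholds and units, not North Labs' actual implementation; a real deployment would use a trained model rather than a simple z-score.

```python
from collections import deque
from statistics import mean, stdev

class EdgeMonitor:
    """Tiny edge-side anomaly watcher for one sensor channel (e.g. vibration in mm/s)."""

    def __init__(self, window=50, z_limit=3.0):
        self.readings = deque(maxlen=window)  # rolling window of recent readings
        self.z_limit = z_limit                # how many std-devs counts as anomalous
        self.outbox = []                      # events queued to sync to the cloud later

    def observe(self, value):
        """Ingest one reading; return True if it looks anomalous versus recent history."""
        history = list(self.readings)
        self.readings.append(value)
        if len(history) < 10:                 # not enough baseline yet
            return False
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            return False
        z = abs(value - mu) / sigma
        if z > self.z_limit:
            # Act at the edge first (e.g. throttle the machine), then report upstream.
            self.outbox.append({"reading": value, "z": round(z, 2)})
            return True
        return False
```

The key design point mirrors what Colin says: the decision happens on the device, so the machine can be protected even if the cloud link is down, and the `outbox` gives the central system its unified view once connectivity returns.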
Oh, that's interesting. It's really interesting. And it segues perfectly into my last question, the Terminator scenario. How probable do you think a Terminator scenario is? And you're already working on the hardware.
Yeah, like Big Brother or an eye in the sky. Look, there's no question that there are already countries around the world implementing this sort of thing. And certain states in the US are receiving a lot of flak for beginning to implement facial recognition and all that. And obviously the Chinese government has its social credit scores and things like that. Will that increase? I hope not.
What is the likelihood that it will?
Certainly there is one. But I will never be a doomsayer. I've been in a lot of very bad situations in my life, in the military and in the global conflicts going on around the world, playing a small role in them. I would much rather believe that technological innovation and adoption is going to help curb
a lot of those very bad things going on in the world, as opposed to taking a good situation and making it awful. So that's the viewpoint I'm going to cautiously maintain, just because I can't afford any more gray hair thinking about the worst-case scenario. But certainly
we're going to see events in the future where somebody acted poorly and weaponized these capabilities. I'm using the term weaponizing, but I don't necessarily mean physical weapons. Certainly fraudsters will, and quantum computing is just around the corner. So now we're going to marry generative AI and quantum computing together, and what's that going to create? That's a whole different
peak for us to climb as a society. The next 10 or 20 years are going to be interesting, to say the least. Great, that's a realistic but optimistic answer. I love that. But Colin, do tell us where people can find you: the firm, and you personally, where can they connect with you?
Sure. So the company is at northlabs.com. Check us out; our website is in need of a little facelift with all the stuff we have going on, but that's where you'll find us. I'm not on any social media except LinkedIn. I cut my social media ties a long time ago and it's been amazing. But it's linkedin.com, slash in, slash Colin Graves. Colin with two Ls, Graves like dead people.
You can connect with me there; shoot me a note saying you loved listening to Dietmar's podcast, and we'll talk about AI. I will put everything in the show notes so you don't have to scrub back through your audio app and note everything down; you can just click there. And the last thing: Colin, thank you for being on the podcast. I learned a lot and it was really interesting. Let's see what the future brings, and thank you.
I absolutely loved being here. Thanks for the conversation. So, I think we learned a lot, and the most important thing is: foundations first. You have to have good foundations for your data to use AI, and then you need a process to use that data to really get something out of your AI.
I think that's really important if you want to seriously implement AI, not just as a hobby, and really generate revenue, save time, and make AI genuinely useful for your firm. So thanks again, Colin, for the interview. And to all the people out there, don't forget to go to argoberlin.com/newsletter to subscribe to the newsletter and get all the podcast episodes in your mailbox.
Also subscribe to the podcast in your podcast app. And I hope to have you back in the next episode. Signing off, Dietmar from Argo Berlin.