Raising Expectations: The Evolution of AI Accessibility with Ryan Cunningham
Aqeel: I am pretty curious, because we've hung out and jammed a few times, but I don't know the entirety of your life story. And I'm happy to; I wanna get into maybe a little bit of the psyche. But you do have quite an impressive background in the tech industry.
Ryan: Yeah. Just right place, right time, and just a lot of luck, honestly.
It only looks like it made sense in retrospect. In my opinion, my path is pretty circuitous. I started as a banker at Credit Suisse in the technology investment banking department. I graduated from Georgetown with a degree in finance and econ, but I was an avid programmer on the side, just because it was fun for me.
I learned Python, actually, inspired a lot by Andrew Ng's AI classes that he put up in 2011, while I was still studying finance and econ. And that inspired me to learn programming and create a lot of my own apps, like the Twitter bots; we were talking about some of those earlier.
And then in college, Dogecoin had just come out as a solid meme. For fun I got these Raspberry Pis for Christmas, and I linked them all together to form a little mini supercomputer with horrible processing power, just mining Dogecoin on this four-node cluster of Raspberry Pis.
Just for fun. I didn't wanna make any money off of it, and I was spending more on electricity, cuz you know, these things are terrible at that. But it was a fun exercise, and that's the kind of stuff I was really enjoying even while I was still a banker. Just these fun little hacky side projects.
Honing my tech skills in a way, even on the job as much as possible. At the bank I was writing a lot of VBA and macros to automate my workflows, because I was just so tired of spending weekends or long hours moving stuff around in spreadsheets and not getting anywhere with it.
I just wanted to click a button and do everything and that'd be it. Yeah.
Aqeel: So did you spend a lot of time learning how to learn? Because I imagine with all of this stuff, you're an autodidact; you have to understand what's going on with the routing of information from the Raspberry Pis onwards. So did you get any time in that meta space? Cuz largely these are all gonna be unique problems to solve. I'm curious, do you have frameworks you've set up over time?
Ryan: No, I'd be lying if I told you that I did. I read a lot of books from folks that were autodidacts as well, just trying to see how they approach these things. And the academic approach, and I don't mean that in a bad way, but the academic, top-down approach of trying to take a framework
and then applying it to this new problem that you're trying to solve doesn't really jibe with me, historically. I just prefer to try something; if it doesn't work, I'll move on to the next thing, or I'll find either the person or the resource that can help me get unblocked and back into the flow state that I want to be in.
And that's been consistent through all of my career. After banking, I really wanted to see that in action at early-stage startups. Because once you get to the stage where you go to the public markets, it's all pretty cookie-cutter. But I wanted to really see, in the trenches, in fast-paced competitive environments, what it would take to win.
And to get to the stage of being able to IPO, that's what I was really intrigued by. And I was lucky enough, through a fortuitous connection from college, to get a referral to join Uber Eats right back when it was just kicking off. They had found product-market fit in 2016 going into 2017, cuz they changed their model.
And when I was there as the first product analyst for the team, we went from a dozen or so cities to 300 all over the world in the first year. So just massive scaling, and having to tie together a lot of our data systems and infrastructure to be able to inform our product and operations teams.
Okay, this is what those policies are doing, this is how much money we're losing, we need to move here, these other competitors are moving in. That was really fun. And that continued through working at Uber's moonshot division, Uber Elevate, so flying cars and delivery drones; I can get into more of that if you want.
And the micromobility team as well, so bikes and scooters. For those two, that's when I started to work with AI teams a lot more. I got to the point where I'm like, all right, I know enough Python and SQL to be dangerous, and now I'm interested in more machine learning workflows.
Like, how do I automate a lot of this stuff even more, like I was doing in banking? And working with really talented research scientists and MLEs to build these excellent spatiotemporal models that we productized was really invigorating. And that carried through into the startup that I joined after Uber.
Spike Trap, which eventually got acquired by Reddit, which was a great outcome for them. Very proud of them. And now, at AI Fund, they were looking for someone who had banking, high-growth tech, and small startup experience.
So: comfortable with ambiguity, knows all the lingo, but knows how to model financially and understands business. But also, wouldn't it be great if they had AI expertise or experience as well? And I was like, this is highly specific and also right on target. Let's do this. So that was two years ago, almost to the date. A Ryan-shaped role.
Aqeel: Yeah. I'm curious. So in the 2010s we saw a lot of B2B SaaS doing well. We saw a lot of what was coming out of all the incubator programs getting funded heavily, things of this nature. And we're putting Web3 aside as an anomaly for a second. What was exciting was you got maybe Discord and Slack.
But then you also had things like delivery services, Uber Eats and DoorDash, where these were the ones that became large valuations. I guess there's Airbnb too, right? So you saw a very small handful of consumer apps that went on to become giant.
Ryan: Correct.
Aqeel: Yeah, in valuation; we're not talking about profitability. Spotify too. But not the volume we can see right now. And so the parallel I'm drawing between 2023 and this 2015 era I wanna ask you about is this blue ocean, because there's a completely novel technology. The reason why you can go from three cities to 300 is because, wow, this is a whole new revamp of how folks have traditionally been getting goods and services. Even if the particular first thing is food, like restaurants, it's new economies, new jobs created, new business models, and the leverage of software being replicated at marginally no cost
from a software perspective at scale. So what was the environment and culture you all were navigating on the Uber Eats team when you joined in 2016? Between the grand vision and the chaotic day-to-day operations, how were you looking at prioritization frameworks and finding those leverage points to work on problems?
And again, there are parallels to be drawn here with GPT wrappers, API wrappers, in 2023, cuz it's get to market with API calls, carve the market out, and then you can do some novel tech when you're in new ecosystems through AI-enabled products. I'm curious about that timing.
Ryan: So in that era in the 2010s, the market was flush with cheap capital. And firms were using that cheap capital to land-grab and buy as many customers as they could, with the understanding that hopefully we'll price it at this level that we think we can get to with automation and better processes over time.
And let's hope that we ride this out long enough that we actually get there. At Uber, this was obviously a very strong competitive environment, even internally; we all wanted to win. And you really can't spell culture without cult, and it honestly felt that way, in a good way.
The value of network effects and economies of scale was king. You needed to make sure that you had enough supply such that the one metric almost everybody cared about never moved the wrong way, which was unfulfilled rate. Unfulfilled being: you request something that's there and then it doesn't show up, or you wait too long and abandon the request.
It was very clear from the data that we had from the rides side that this was basically the one thing that, if it ever slipped, was the worst thing that could happen. So all the GMs were super focused on that metric, and that guided focus. It also meant that it incentivized and encouraged a lot more spend on supply acquisition than we might have needed.
Far beyond the point of diminishing returns. In many markets we had acquired too much supply, and we could have cut back on spending a little bit and not seen a drop in our fulfillment rates. But to even recommend that was almost sacrilegious internally. And so it was trying to go from this growth-at-all-costs mindset, which had been the mentality for the entire market for a long time, to this intelligent spend, this intelligent acquisition, whatever you wanna call it in Andrew Chen's book.
It's kinda like once you pass the inflection point and you start to level off in the maturity of your marketplace; that's what we were trying to navigate to. And that required a lot of prioritization and a lot of change management. On the prioritization side, what you could always count on was asking: how much is this gonna cost?
How can you look at the unit-economic breakdown on a per-trip basis, and then say, how is this decision going to impact our margins? And the team was laser-focused on trying to get to EBIT positive. Not because we were under pressure to achieve profitability, but because we believed that we could.
And we did, for a very brief moment, because a lot of the policies we were recommending and observing got us there. But then, almost as soon as we did, DoorDash raised 500 million from SoftBank and launched a totally different strategy to take a lot more market share in the suburbs. So very exciting, very competitive environment.
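The per-trip unit-economics framing Ryan describes can be sketched roughly as follows. Every number here is invented purely for illustration, not an actual Uber Eats figure:

```python
# Illustrative per-trip unit economics for a delivery marketplace.
# All figures below are hypothetical, chosen only to show the mechanics.

def trip_margin(basket, take_rate, courier_payout, promo, support_cost):
    """Contribution margin for a single delivery trip."""
    revenue = basket * take_rate                   # commission on the order
    costs = courier_payout + promo + support_cost  # variable costs per trip
    return revenue - costs

# A hypothetical trip: $25 basket, 30% take rate, $6 courier payout,
# $1.50 in promotions, $0.50 of allocated support cost.
margin = trip_margin(25.00, 0.30, 6.00, 1.50, 0.50)
print(round(margin, 2))  # negative: losing money on every trip

# "How does this decision impact our margins?" then becomes: rerun the
# same breakdown with the proposed policy change applied.
cheaper_promo = trip_margin(25.00, 0.30, 6.00, 0.75, 0.50)
print(round(cheaper_promo, 2))  # cutting promo spend moves the margin up
```

The point is not the arithmetic but the discipline: every proposed policy gets translated into its effect on this one per-trip number.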
Aqeel: So at that point in time, it's just this Olympic race on capturing the market. There was a role in venture that was essentially: you could use these hard costs you're taking on to just acquire as much of the market as possible, so you could get into a bit more of a fight at that point, just for the sake of market share.
Ryan: Correct. Because at the end of the day, all of these businesses were selling commoditized products. You were selling basically food access through your marketplace, but you could just as well go somewhere else and also get a pita wrap. You were selling rides, but you could just as well go on some other app and go from A to B, with some exceptions.
In a commoditized marketplace, the name of the game really was spend.
Aqeel: So, a sharks-and-minnows example for right now: again, there's a lot of blue ocean, like any sort of AI-enabled use case these days, and there are dollars to be gained.
Ryan: And it's also a very different kind of business. I emphasized the commoditized marketplace point a second ago because there's a large graveyard of failed smaller startups that wanted to just own a small piece of that pie but quickly realized that it's an all-or-nothing game. And the market is really only gonna have appetite for, at most, probably about two winners: one far in the lead, and the other maybe not so far behind.
I don't really see that to be the case with the current space, with some exceptions. If you break out artificial intelligence right now into different layers of use case, so the application layer, expertise layer, infrastructure layer, foundation model layer, the lower you go, the more that commoditization rears its ugly head once again. At the end of the day,
it is a language model API. It is a speech-to-text API. And all that really matters, once you cross some threshold of acceptability from a market-readiness perspective, is cost. So if you were to, and I've done this, break out all of the per-inference (or per whatever your operation is) API costs for, say, the diffusion models that are out there, there are clear winners in terms of how much they're charging, like Midjourney, and then there's a long tail of smaller folks that are cheaper right now. But do they actually have the quality that people have come to expect and demand? The jury is still out.
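The cost breakout Ryan describes can be sketched like this. The provider names and per-image prices are made up for illustration, since real pricing varies and changes constantly:

```python
# Hypothetical per-image prices for diffusion-model APIs (invented numbers).
prices_per_image = {
    "provider_a": 0.020,  # the quality leader, priced accordingly
    "provider_b": 0.008,  # long-tail challenger, cheaper today
    "provider_c": 0.012,
}

def monthly_cost(price_per_image, images_per_month):
    """Projected monthly API spend at a given volume."""
    return price_per_image * images_per_month

# At 100k images a month, the spread across providers is large enough
# that cost alone can decide the vendor, once quality clears the bar.
for name in sorted(prices_per_image, key=prices_per_image.get):
    print(name, round(monthly_cost(prices_per_image[name], 100_000), 2))
```

At volume, the cheapest and most expensive provider in this toy table differ by 2.5x in monthly spend, which is exactly why commoditized layers compete on price once quality is acceptable.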
So the more commoditized it is, the more that allegory of the 2010s matters. Once you get further up into, say, email assistants or digital avatars, there are many more small and mid-sized business opportunities. I think where that matters is the verticalization of the use case.
Like, Bing is not going to have, not that they're not able to, but they're most likely not going to have a ChatGPT plugin to the Edge browser that does everything you want for browser-based productivity. They'll do a lot of things well, and probably for most general use cases it'll be great.
Yeah. There are gonna be some very specialized use cases where your empathy for the user experience and your laser focus on a single problem, not necessarily a platform, but a single problem to solve, makes the difference.
Aqeel: Yeah, this is a good point. And as I was saying, there are not too many consumer unicorns out there that came out of the 2010s.
We did see our fair share of Chrome extensions doing well. Just a little widget: everyone's already on this platform, everyone's already using this thing, so let me just add this as a right-click functionality, like when I'm trying to read a certain kind of text, or just bookmark it. Some companies did pretty well with a very lean team, and this is pre-AI, not AI-related, right? Things like Readwise: just bookmark to read later, with a UI.
Ryan: Or Pocket, or other tools like that as well.
Aqeel: Exactly. They know consumer behavior: just open a new tab really quickly and then batch it later on, and folks feel like it's nice and bookmarked in one spot. It's the empathy for the user to know that. So, Twitter's bookmarks: there's a whole joke on Twitter that when you bookmark everything, you'll never go into your bookmarks. Readwise solved this.
Ryan: this. Have you ever tried saving anything on LinkedIn either? LinkedIn posts, same thing. Yeah, same problem.
Aqeel: It is just a nice little graveyard. If anything, for me, it's like the equivalent of a little upvote for the algorithm. They'll know that. So it's like another way to support this person, in my brain.
Ryan: So you're like, I'm doing my part. I'll save this even though I'll never see it again.
Aqeel: Yeah. Sometimes I'm like, oh, this is super relevant to that idea I had that one time. I'll come back to it when I'm in a space where I can just go through the history of my life and write a manifesto. I'm just gonna go through all the things I was reading at the time, or saved, and I'll track my digital footprint through my saves. Will I get there? I dunno. We'll see.
Ryan: We'll see. Yeah. But to your point, the reason why you like Readwise, and we mentioned Pocket and some others, one of the reasons at least, as I'm interpreting it, is that
it gets what you want done very quickly. All that you wanted to do was bookmark something and then perhaps view it later. Maybe there's clustering that happens on the back end, so that it clusters relevant bookmarks and tabs into topic areas you can extract information from later on.
But you're not active in that process. All that you wanted to do was say: this is interesting, I wanna read it later. Two keystrokes and it's done. So as you're building AI apps that are hyper-specific to these use cases, whatever you can do to reduce any interruption to the flow state is what I've been instructing people on in terms of UX differentiation, a hundred percent.
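The back-end clustering Ryan speculates about, grouping saved items into topic areas without the user doing anything, could be sketched as grouping bookmarks by embedding similarity. The 2-D vectors here are toy stand-ins for what a real embedding model would produce:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def group_bookmarks(bookmarks, threshold=0.8):
    """Greedy grouping: join the first cluster whose seed item is similar enough."""
    clusters = []
    for title, vec in bookmarks.items():
        for cluster in clusters:
            if cosine(vec, cluster[0][1]) >= threshold:
                cluster.append((title, vec))
                break
        else:
            clusters.append([(title, vec)])
    return [[title for title, _ in cluster] for cluster in clusters]

# Toy bookmarks: two ML articles and one baking guide.
saved = {
    "Intro to transformers": (0.9, 0.1),
    "Attention is all you need": (0.8, 0.2),
    "Sourdough starter guide": (0.1, 0.9),
}
print(group_bookmarks(saved))
# The two ML articles end up together; the baking guide stands alone.
```

A production version would swap the toy vectors for real text embeddings and a proper clustering algorithm, but the UX point stands: the user's only action is the two-keystroke save.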
Raycast is one of my favorite things right now, because it's completely changed the way that I interact with my laptop. I replaced my Spotlight search with it; it's very customizable; I can summarize a YouTube video with just four or five keystrokes.
And I don't even have to watch the whole thing. Now I have the gist of what was being discussed. It's a launcher for all of the apps that I'm working in, and I never have to touch my mouse. Even going back to my phone feels primitive now in terms of the UX, because I stay in that flow state of productivity much more easily with this application.
So anything that gets out of the way of the user. Cuz AI is not the product. AI is an enabler. It's a means to an end.
Aqeel: Yeah, that's what I love the most. Absolutely. And it's an interesting concept here: how quickly we adjust to these things, these conveniences in our lives.
There were all these complaints when ChatGPT came out and had its downtime at the very beginning, given how fast it grew. There were folks saying, I don't even know how to write an email anymore. That's what you want. You want folks that are literally so adapted that they're reliant on this as part of their workflow. Or I would be.
This is why I like the micro-tooling application layer, where there are these eight-figure pockets. At that point, if you just know your audience, they probably congregate somewhere digitally, and you can just post in there, have ambassadorships, and do some sort of referral schema.
But yeah, there are all these prompt-cookbook-type things, and there are forums and Facebook groups and Discords of prompt engineering nerds. That's a great place to put your little prompt, whatever it might be. And that's not the biggest audience on the planet, but some folks might really like your thing.
Oh sure. And you can figure out what's valuable to them, have a freemium model, and it's a ton of fun.
Ryan: Yeah. What I get discouraged by sometimes, and I'm more optimistic than ever, is when I'm talking to entrepreneurs at these events, like Cerebral Valley and a Hugging Face event a while back. They may come up to me and say, hey, I built this app, and it's basically an API wrapper, but we'll call it an app. And it does this one thing. And I'm like, great, that's cool. How long did this take you to build? And he's like, oh, I just threw this together in a day or two.
I was like, that's amazing. I'm glad that the accessibility is so much better now. And then they ask, do you think that I'm on the right track here in terms of a unicorn-potential startup? And I'm like, that's kinda the wrong question to be asking. Cuz if it took you 12 hours to make,
it's gonna take anybody else the same amount. All that's happened is that the baseline expectations for what you can do with this technology have gone up. So there is an element of FOMO, a fear of missing out, in the entrepreneurial community at the moment. But it's causing more anxiety than there needs to be.
Like, you and I, we talked a lot about this: don't put all of your eggs in one basket, saying, oh, I need this next thing to be a unicorn. Instead do exactly what we were saying before: go into these communities and just build stuff that people really like. And try not to put too much pressure on it.
Just try to solve hard problems. And eventually you're gonna come across something that a lot of people share the same feelings about.
Aqeel: Yeah, exactly. And that message for me is for those independent hackers, right? Those folks who are doing whatever they need to do to get to SF, or just playing hooky from university, wherever they're at in the world.
And it's like, oh man, it's so good in AI right now. There's the ability to hack fast and build products, so you can react very quickly to the market and things of that nature, and also implement your ideas so that you have a working demo. That, I think, is getting to the point where it's almost instantaneous.
I think, again, we're asymptotically approaching: as you think it, it exists. That's something I'd love to get into a little later on down the road, to get some more concrete advice from your brain: AI in our world, whether you think it ends there, or it does the thinking for you, and what that implies about human day-to-day life. But that being said, I think entrepreneurship is a separate skill from being able to build product. Entrepreneurship is this empathy, the cost to market, the distribution,
Ryan: The strategy, being able to ask questions in research that don't bias the reaction that you're gonna get from prospective customers or even experts. It's such a common error that people make, where they go out trying to sell what they've already built, when what they should be doing is active listening. And there are ways in which you can maybe use an LLM to learn how to be better at that. But it is an acquired skill to be able to engage with someone in a very empathetic, active conversation where you're hearing out what their problems are, rather than trying to say, hey, I built this thing.
Do you like it? Will you buy it? There's a time for that, but not upfront.
Aqeel: Yeah, absolutely. I'm curious if you have explicit things people can look up: books, accounts, folks to follow?
Ryan: The Mom Test. Yeah, it's just The Mom Test; themomtest.com is the website, you can download the PDF. It's a really excellent repository of questions that you should be asking, and, for questions that you might want to ask, it tells you: say this, not that, instead.
Aqeel: Yeah. They're really good at reducing the human biases we'll have around politeness. I think that's huge. Don't ask your friends and family; again, that was a very simple one there. I'm like, let the thing stand alone, just exist, and see if people play with your widget.
Ryan: Exactly.
Aqeel: Or give you feedback in a way that's anonymized, to take away as much human bias as possible. So it's: do people genuinely appreciate this thing?
Ryan: Very few people are willing to give you radical candor to the point of almost seeming rude, like you're shooting down their idea. Nobody wants to do that by default, at least not most of the people that I've met. So we do have a predisposition to being polite, which can get in the way of finding PMF if you're not careful.
Aqeel: Yeah. So there's two things there. On the entrepreneur stuff, either you're coming at a new idea from this internal passion that's innate to you, so all the other crap, like ops, legal, hiring, cash flow, you'll muster through because you have so much passion for this thing. Versus, you've got: I think it'll make cash, and it's a get-in-and-get-out concept. I think entrepreneurs right now are able to do both. I guess I'm pretty curious what advice you have for these kinds of entrepreneurs right now, because there is incredible anxiety and FOMO, and everyone feels like, okay, great, we're gonna build an API, or an application off an API, right now on this thing.
In weekends, which frankly is true. Where I want to come to with this is asking from more of a grounded position: an investor has to think about multiple time horizons. And there's the traditional training on, is this venture scalable?
Why this person, and some of the other questions there, like cash flow and stuff like that. What's the actual go-to-market plan, and the scaling plan here? But more so from the investor perspective, what are the things you're seeing as you're getting pitched all these early-stage ideas? Like, okay, here's a niche: Ryan, we figured out that folks don't read their bookmarks anymore, so I want to give them an API wrapper that reads their LinkedIn bookmarks, parses that language, and then sends them a text message with a synopsis once a day or something.
Ryan: Yeah. I think I've actually heard that idea.
Aqeel: Yeah. I think we've seen like 2,000-plus.
Ryan: I think so. There are some great folks in Cerebral Valley building some really cool stuff. I think it all boils down, frankly, to defensibility. If I were to write down my list of criticisms of most pitches or conversations that I've had with entrepreneurs, it comes down to: they haven't yet thought through what makes this particular product or this vision highly defensible against a litany of competitors.
And there are several ways in which I've tried to push folks in the direction of thinking about it. The first, and this is in no particular order, is UX. We talked about that before, but the headline is just: do whatever you can to get folks back into flow state.
The AI is not the product. The AI is an enabler to fixing the problem as fast as possible. So if you can get that down to, say, just a keystroke, or even predict it ahead of time, great. But as soon as you get folks trying to interact with your UI in order to get a very specific prompt, and they're going through a bunch of different iterations that take them away from the thing they wanna be doing, then you know you've lost the game.
So that's bad UX. And a lot of folks that are really excited about the technical complexity, myself included sometimes, lose sight of that. So user experience is tremendously important. The second is your data moat. So remember how existing Web2 companies were thinking, oh, what if we do Web3?
It's the same with existing non-AI adopters right now. They're thinking, oh, how do we use our own proprietary data, use call centers as an example, in order to become sort of an AI-first company? Can we build a better LLM than what exists from foundation models? And so we see some pitches like that, where folks will come to us and say, we've got this amount of experience, we think we can train a better automatic speech recognition model, or something else.
And the question that I have to ask is: what is the marginal value of your proprietary data set, gigs, terabytes, whatever it is, on top of what these large models have already been trained on? Are you just providing extra text, and is it text that probably already exists in abundance in the region of vector space that these models are trained on?
If so, whatever you have is probably not going to be marginally that much better when you fine-tune a foundation model on it. But if you have multimodal data, where, say, you have a combination of text and images, or text and video, which leads to certain outcomes, there's not as much of that on the web.
There are some places that have that, but it may be hyper-specialized to a given vertical. Reddit pre-2022 is maybe an example, which is probably a lot of the reason why they're fumbling the bag on this API change that they're doing. But point being, access to multimodal data may end up having higher marginal value than just having unimodal data, whether it's proprietary text, proprietary images, audio, et cetera.
Anyone who's trying to build a custom speech recognition model right now, for instance, is blown out of the water by Whisper. It doesn't really matter anymore; the marginal value is too low. That's data. And then the last one is level of difficulty, or level of complexity, of the stack that you've created.
So data's one part of it. But, say, do you have just an off-the-shelf transformer that you're using, and you haven't really done any fine-tuning or extra instruction or anything else on it? Or do you have some sort of custom architecture that would be prohibitively difficult for somebody else to put together?
Aqeel: This is more like the complexity of what you're trying to deliver. Like how many parts there are that compose the thing itself.
Ryan: Exactly.
Aqeel: That's very interesting. I hadn't heard it explained that way before, but that's true.
Ryan: It dovetails with the multimodal piece. That's more of a jigsaw than it is just one image.
Aqeel: Oh, I've thought of it in terms of how folks are doing prompt training and how they're gathering context for different steps in workflows.
Ryan: That's also a piece of it, yeah, for, say, text. I saw a great demo from Cerebral Valley last week that was using a vision transformer to totally simplify the way that video editing was being done.
And I don't want to steal their thunder, but it was just a tremendous example of using a vision transformer that they had done some custom things to, which made it way better than whatever was off the shelf: take 10 hours of footage and then, within a minute, extract it within the same plugin, like within Premiere Pro or Final Cut Pro or whatever video editing app you're using.
And then have that be editable clips, some of which were already taken out, and you can put them back in if you want to, but the AI is saying, we recommend you delete these, and you cut together a supercut of the whole thing. That was tremendous. And it was a really good example of great UX: you didn't have to download a brand new MP4 and then cut it yourself. It just happened in the tool that you're accustomed to. On the data side, they had pre-trained their own custom transformer; it's video, so it wasn't multimodal necessarily, but video's a little harder.
And then the stuff that they had done to the model was very difficult for Adobe or somebody else to come in and say, we're gonna do exactly that. And I really like that.
Aqeel: So that's our first type of entrepreneur there, cuz he's awesome, a great character, and he's particularly passionate; he's been doing this stuff for eight to nine years in cinematography and video editing.
Anyway, he built that knowing that this is the software everybody uses, right? He's building the equivalent of a plugin in these editing softwares, where you're getting that power-user audience that's always editing video, including on their own time, and it's their profession.
That's the right concept, where you have the first type: yes, there's money to be made because there's incredible value there, but this person knows that space well and has experience. What are you seeing from some of these other folks, and I'm sure you've had pitches from them or just observed them, who are like, hey, I've been a lawyer for 15 years, can I use this stuff, is there an idea here, or something like this? I'm curious about painting some color on those folks who are particularly passionate to have a product, like we were saying, with the nice overlap of both hacking fast and domain expertise. Probably clientele connections as well.
Ryan: And a network.
So this relates to the expert layer that I mentioned earlier, when we say: application, expert, infra, foundation model. Expert isn't really mentioned very much in the same market maps or anything like that, but this is alluding to exactly what you're pointing out here.
Yeah. Lawyer for 15 years, accountant for 25 years. Someone who has developed a specialization in a field that requires a high degree of education and expertise in order to do it correctly, and for which the cost of doing it wrong, the cost of an error, is actually rather high. You lose a case, you make some sort of accounting error.
There's a lot that can go wrong there. Versus you write some marketing copy that doesn't hit as well: no harm, no foul. So for those use cases that require a greater degree of expertise in the loop, I'm very optimistic and very passionate about the augmentation capabilities that AI assistants actually bring to those workflows.
Like what Harvey has done. So Harvey, having OpenAI front their seed round, has an existing partnership with Allen & Overy, which is a top law firm, where they're basically shadowing, as far as I understand it, I believe they're shadowing lawyers and paralegals on a lot of legal tasks, whether it's research, case lookups, summarization, et cetera.
And fine-tuning a much better expert version of this model versus whatever you would get from a base LLM. Did you see the lawyer that wrote his case entirely in ChatGPT and got raked over the coals? Roasted, yeah. Oh my God. East Coast? Yeah. Yeah. Just completely hallucinated cases that don't exist.
The judge gave him a week to find this stuff, and the best the lawyer could come up with when he was finally up on the bench was, I thought it was a search engine. And someone coming in and having those expectations, oh, I'll just use this now.
Yeah. It doesn't work in those cases where you do require a lot of expertise. So instead of treating it as a replacement, treat it like an intern that you can train to do the job very well, and maybe sometimes they're drunk, so they'll get it wrong, but as long as there's a human in the loop, you'll contain those instances. That's the right strategy, and that's the case for legal, for accounting, for very high-domain-expertise fields like pharmaceutical research and so on.
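Ryan's intern-with-a-human-in-the-loop idea can be sketched in a few lines of Python. This is a hypothetical illustration, not any real product's API; the `Draft` type, the citation check, and the function names are all invented for the example:

```python
# Hypothetical human-in-the-loop gate: the model drafts, but nothing
# ships until an expert verifies it. All names here are illustrative.
from dataclasses import dataclass, field

@dataclass
class Draft:
    text: str
    citations: list = field(default_factory=list)
    approved: bool = False

def model_draft(prompt: str) -> Draft:
    # Stand-in for an LLM call; a real system would query a fine-tuned model.
    return Draft(text=f"DRAFT response to: {prompt}")

def human_review(draft: Draft, verified_sources: set) -> Draft:
    # The expert confirms every citation actually exists before approving,
    # exactly the check the "hallucinated cases" lawyer skipped.
    draft.approved = all(c in verified_sources for c in draft.citations)
    return draft

def submit(draft: Draft) -> str:
    # The containment step: unreviewed output never leaves the building.
    if not draft.approved:
        raise ValueError("cannot file unreviewed AI output")
    return "filed"
```

A draft whose citations all check out gets filed; a draft citing a made-up case raises an error instead of going in front of the judge.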
Yeah, a lot of augmentation happening there. Yeah.
Aqeel: And you've already touched on this, I know, but I also want to bring to light another thing in terms of data, which is the usefulness of the information being transmitted here. And with Harvey, one great thing is just that RLHF component and your design as a provider.
So you get to the market right now, and your end user is giving you feedback on what the outputs are. That's great data, I think, for an entrepreneur, cause they understand fundamentally the entrepreneurship skill set of, how do I make this product better, and then also, how am I gonna put this back into my product, which is AI on AI.
Absolutely. And I think that's also a really big one here that folks forget. It's not just that data is your moat.
Ryan: It's that you can create proprietary data for instruction tuning that makes your version a lot better. Yeah. That is crowdsourced from
Aqeel: your end users.
Exactly. Which is the best part about it. That's why it's nice to work with those first 50 users, or a few hundred, for sure. And it only takes a month or so to start implementing that and go from the 80th or 90th percentile up to the higher 90s.
Ryan: So it works in the expert cases for sure.
And that's why having those partnerships with early-adopter central figures in whatever you're going after is tremendously important to execution. But it also exists on the consumer, less defensible side. Midjourney, for example. And I'm probably getting the timeline wrong, so someone's gonna out me in the comments.
But I think that Stability came out with DreamStudio probably a little bit before I had heard of Midjourney. And so I had been playing around with DreamStudio, the web app, for a little bit. And it was fine, but there wasn't really a way for me to provide good feedback beyond a thumbs up or thumbs down and some short text as to why.
Whereas Midjourney, which had this clunky interface on Discord that people were digging on a little bit, ended up actually being complete galaxy brains, because the interface on Discord provided perfect telemetry for how people were rating the diffused outputs that the model was making.
So you click a button that says, oh, I like this, give me more variations of that, or, this is great, upscale that. You're not getting free-form text that you're having to parse; you're just getting straight-up user interactions, which are then creating an instruction dataset to create better and better versions of whatever open-source model you started with.
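That interaction-to-dataset loop can be sketched roughly like this. The button names mirror the Discord UI, but the reward weights and the pipeline itself are assumptions for illustration, not Midjourney's actual system:

```python
# Hypothetical sketch: turning button clicks (upscale / variation / re-roll)
# into a preference dataset for fine-tuning. Weights are made up.
from collections import defaultdict

# Implicit feedback: "upscale" signals strong approval, asking for
# variations signals mild interest, a re-roll signals rejection.
REWARD = {"upscale": 1.0, "variation": 0.5, "reroll": -1.0}

def build_preference_dataset(events):
    """events: (prompt, image_id, action) tuples logged from the chat UI."""
    scores = defaultdict(float)
    for prompt, image_id, action in events:
        scores[(prompt, image_id)] += REWARD[action]
    # Keep only net-positive generations as training examples.
    return [{"prompt": p, "chosen": img, "score": s}
            for (p, img), s in scores.items() if s > 0]

events = [
    ("a cat in space", "img1", "variation"),
    ("a cat in space", "img1", "upscale"),
    ("a cat in space", "img2", "reroll"),
]
print(build_preference_dataset(events))
# img1 nets +1.5 and is kept; img2 nets -1.0 and is dropped
```

The point Ryan makes is exactly this: button clicks need no parsing, so every user interaction becomes clean training signal for free.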
So by the time Stability got their act together and modified their UI to resemble that, Midjourney was far and away, in the minds of most users, the best text-to-image model on the market right now. Exactly. And you get
Aqeel: this way to understand what the user was going for by typing in this query, this prompt.
Yes. And then you can iterate towards enough, but few, options. So it's a machine: you can train a machine with that. Yes. And that's beautiful. So that is a great segue into the last question of this topic you wanted to bring up, which was differentiation versus defensibility.
Yeah. And so say we had two Harvey AIs, and someone wants to do this. How would you talk about differentiation? I'm not naming any one investor, I'm just making up a hypothetical, like someone's trying
No,
Ryan: I'm not saying there's anything stopping, I don't know, Harvey Dent.
There's nothing stopping a Dent AI from going to Latham & Watkins or some other law firm and saying, hey, don't you wanna be on this too? Yeah. I imagine that there are a lot of partnerships cropping up for that, and OpenAI and Bain are doing that as well by formalizing a partnership to embed OpenAI's models into Fortune 100s like Coca-Cola.
So certainly nothing stopping folks from doing that, but getting access to a very valuable expert partner is crucial. Yeah. If you're just building something on the outside and you don't have that partnership in place, it's a lot tougher.
Aqeel: Yeah.
So is differentiation a subset of defensibility in this case? Where it's just, okay, all the others check the boxes here in terms of: they're an expert, this is the application, et cetera, et cetera. The data, how it's set up for human feedback, all that might be the same.
The difference is this person's got the market relationships, or the network effects. Maybe it's the alumni of this college that has a lot of law firms, versus someone who doesn't even have a JD, as an example of differentiation. Yeah, I'd say that would be it. From an investor perspective, how are you thinking about why this one over the other five folks who pitched you the same idea this
Ryan: quarter or something.
Sometimes I think people, and I'll throw myself in as included, may use the terms interchangeably, even though they mean very different things. It's kinda like a square and a rectangle. So in the example that you gave, you have two, Harvey AI and Dent AI, and one CEO happens to have a JD while the other one does not.
Just hypothetically. You could say that this one's differentiated because of the level of expertise of the founding team. But that in and of itself doesn't mean that their product's more defensible, versus Dent AI just having better execution and then completely overtaking what they're doing.
Versus if you had one that specialized in IP law and another specialized in, I don't know, something that was so far removed from IP law that it was highly unlikely that one would encroach upon the other. Then they start to mean more similar things: they're different, and they probably won't bump up into each other.
Aqeel: Brilliant. Brilliant. So that was good to know, some thoughts there from, I guess, a traditional investor's perspective. But I do wanna get into the venture studio conversation.
Ryan: Yeah, we can get into that for sure. I think AI Fund, which I've been working at for about two years now, first started in late 2017. Andrew Ng raised 175 mil from a lot of really high-quality LPs and strategics.
And the intention was for us to, instead of just deploying capital into a lot of early-stage companies, say, hey, we're horizontal specialists, in that we know how to apply and productize AI, but we wanna work with vertical specialists that have a much deeper familiarity with the problem, or empathy with the customer, who can tell us precisely: yes, this is a very strong pain point.
I would do anything for this to be fixed today. And then it's on us to say, all right, how do we collaborate to build something that uses AI but gets out of the way as soon as possible, so you can get back to being in flow, whatever it is that you were doing. And that's been happening for almost six years.
We've changed our model a little bit over that time, but what we've converged on has been very effective. Yeah.
Aqeel: This makes sense. Could you walk through how one might expect the terms and the day-to-day to be a little bit different working with a venture studio versus just raising
Ryan: traditional venture or angel checks?
Yeah. Every studio's gonna do it a little bit differently, but for the most part, studios are gonna be very hands-on. Like, we aspire to play the role of a minor co-founder with the companies that we build. We hire the founder-in-residence to be CEO.
But the intention is that he or she will run the show. Once this company launches, it's their baby. Because you can de-risk as much as you can on the idea, on the market, on the tech, and we do a lot of that, so we earn our keep, but ultimately the success of a startup, especially at an early stage, is made or broken by the quality of the founder.
We recognize that, we totally abide by that, and we do as much as we can to reflect that in the terms, in the cap table considerations that we get. We're minor, we're here to help, and we are earning our keep because we're on common shares, we're on the cap table.
But we're not your average VC, cuz everyone else is gonna just give you money, and maybe they say, oh, we got connections, we'll give you connections, or you got the logo. But we actually deliver the ML expertise, the ML development. We deliver the prototyping, the actual recruiting of your scrum team, of any technical co-founders that you need.
It's very hands-on.
Aqeel: Some funds have called this portfolio operations, where you're just wearing the hat of, I'm gonna be a generalist and help everything happen. Is that a little bit of your job?
Ryan: Yeah, I would say that my job is like half product manager and half investor.
Investor. Yeah. Because I'm also taking ideas in from the top of the funnel and then doing the VC stuff first, saying, all right: what does the competitive landscape look like? Is this a business that makes sense? Is it venture-scalable? What is the problem it's solving? Basically, that's the investor hat.
And then once we're like, oh, I want to build this, I go right into PM mode and I'm saying, all right, we've got six sprints ahead of us. What are we gonna get done by when? Yeah. And it all accumulates into the pitch where we say, we wanna fund this. Here's the strategy, here's how we're gonna hit it.
Here are the partnerships we're already establishing. And then it's off to the races.
Aqeel: You all moved earlier than some other firms on just identifying that there are use cases at the infrastructure layer and at the application layer. And again, this fund was raised years ago, and for Andrew, this has been a trip.
Like, for a lot of folks, can I say grandfather? He's like the grandfather for a lot of folks. Yeah. His content was the thing that you learned from for
Ryan: free the first time. That was like the most referenced video course for learners. I wonder if you'd go for godfather; he's not that old.
Yeah.
Aqeel: Oh yeah. That's, sorry. Yeah, I'm teasing, a hundred percent. What has gone on in your thesis in terms of how you guys are going through your selection criteria? Are you updating the sourcing, and that decision of converting to doing business with a founder, with this massive influx at the end of 2022?
Man. Yeah. What's that conversation?
Ryan: Defensibility comes up a lot. Okay. Like, we say, look, Aqeel can go build this in a weekend. What does that do to the potential success of this one thing that we want to work on? Is our approach defensible enough against everyone else trying to do the same thing?
So there's a lot more emphasis on that now than there used to be, when you could just lean on there being limited folks with enough AI expertise to make something like that work. And frankly, I'm all for it, because I think it's raised the bar and it encourages people to be more innovative. So I welcome that.
And it's also reiterated the importance of the founder, like we talked about before. When other opportunities to differentiate, in terms of expertise or complexity, et cetera, are eroding, then at the end of the day, the founder's ability to clarify a concrete product vision and win the hearts and minds of not just their teammates but also follow-on investors is what wins the day at an early stage. So you gotta have done your homework: the product needs to work and the problem needs to be validated and verified. You can't just sell folks snake oil and expect that not to haunt you.
But the founder matters a lot. Yeah,
Aqeel: that makes a lot of sense. What's different about the economics, or what terms might you be considering? You know you have orders of magnitude more overhead with each company you're working with, now that you have your own staff helping solve problems, being that micro, or sorry, that smaller co-founder. Yeah.
Ryan: Role. Just so that
Aqeel: Or sorry, if a founder's looking to figure out where to get funding from, depending on what their needs are, what's a better fit for a venture studio? There's a few of these, but why might they go to a venture studio versus others, and what does the venture studio want to make these things
Ryan: worthwhile?
So there's a great book called Venture Studios Demystified that breaks out a lot of benchmarks and examples that I think your listeners might find helpful. Before I answer, I think it's important to recognize that Y Combinator basically set the market at giving 7% away for access.
And I'm not intending to denigrate it, but it's 7% away for access to a really excellent alumni community, some mixers and events that are being thrown, a Slack group, and demo day. Okay, but not really the hands-on stuff that we were talking about before.
Not helping you with recruiting, not helping you with the actual prototyping and development. Not having, basically, a founding PM on the part of the builder that's right there alongside you in the trenches. So, for better or worse, 7% is gonna be the floor for what you would expect to give away when you're working with an incubator, a studio, or an accelerator. What the book would tell you is that, at a minimum, a studio is gonna need, because of the higher overhead, at least probably 10% of common shares on the cap table in order to make it worth their while.
We don't target a given percentage, though. We establish a given ratio with our FIRs, our founders-in-residence, and we let them decide: all right, how much do you think you would need on the cap table in order for this to be an attractive opportunity for you? And we'll take a ratio of that in exchange for the sweat equity that we're putting in. And having an employee stock option pool as part of that is part of the equation as well. So we're all on the same side when we're having that conversation. It's about how this cap table needs to be structured in order for, one, us as an overall team, including the FIR, to be happy,
and two, for follow-on investors to also be happy, to see enough of an opportunity there. There are other studios and other teams that do a ton of their stuff in-house, and I've seen things as high as 60% on the cap table, and that's just ridiculous. There's very limited opportunity, I think, for follow-on investors to look at that and say, how much is left for me to actually take in exchange for capital?
So some folks will try to fleece you. But finding a team that wants to work with you, that has to earn its place on an idea they've already de-risked and are saying, we wanna work with you on this, that's the right approach in my view. But of course, I'm biased.
Yeah. There's something
Aqeel: to be said about how some traditional venture might be nice, especially at earlier stages. It's nicer to have the aspect that there's a threshold of due diligence and things of this nature, but the volume of checks you're cutting just helps you spread your bets, so that you can get that thousand-x thing without looking at it again, and boom, a hugely coveted follow-on opportunity comes out of a bunch of these 250K checks or something like this.
Yeah. So with a venture studio, are you getting a little bit more of, we want these individual things to perform, cause we have much more skin in the game? So you might be more focused on things like cash flow and profitability, and fine-tuning that, versus venture-scalable growth. Traditional venture, especially at pre-seed stages, is somewhat incentivized by this double-edged sword of making things look good to sell to the next stage as a way to run their business model, and maybe the founders don't get as considered in that model, versus what might be a different experience for the founder's longevity or economic upside in a venture studio. Do you get the question?
Sorry, I wasn't the best
Ryan: here, but I think there are kind of two questions baked into it. If I understood it correctly, there's: how do you think about the amount of capital that you're gonna allocate, such that you lengthen, or right-size, the time they have before going back to tap the market again for more?
That's one piece of it. And then, how does founder longevity factor in? Those are two of the questions that I extracted.
Aqeel: Yeah, I guess I was rambling. I was saying, so traditional venture might be more focused
Ryan: on pumping out a lot of checks. Oh, yeah. Shots on goal is still
super key. It doesn't mean that we skimp on, or try to ship, things that we don't love. Yeah. This makes sense.
Aqeel: Yeah. And you mentioned a studio might want 60% of the business in that example, because if you know that this thing is a moneymaker, you just wanna own that
Ryan: business.
Aqeel: But then there's the scalability question of, you're at the top of that
Ryan: market. You wouldn't be able to go to, you wouldn't be able to attract an FIR, a founder-in-residence, with that kind of a tactic. Hundred percent. And follow-on investors would be like, absolutely not.
You're gonna have to adjust this. Everything that we have done, and we've built 33 companies so far in our first fund, has been calibrated such that it's fair to us, it's fair to the FIR, and it's fair to follow-on investors as well, subject to some wiggle room as we're able to bring additional investors into the pre-seed round.
We try to make everybody happy. Wow. This
Aqeel: is awesome. Yeah, I think there's lots to be said here. I've actually been seeing a lot more studio entities pop up, yeah, actually this year. Each is still a single-digit number of companies, but they have sizable funds, something like this. Yeah, I guess I'm pretty curious.
My justification of why that is, is because you've got a lot more of these domain experts that need that infrastructure: entrepreneurship, building product, going through what it means to build a scalable venture. They're like, hey, can I do a thing now? And studios attract founders-in-residence that might not necessarily have started out to build a technology company, but they've got the 20 years of accounting and a bunch of relationships, or maybe even a book of business as a CPA firm owner or something, and that's attractive for a venture studio and its operators. That's how I've justified it in my brain: oh, that's why all these entities are popping up. And I guess it's just a different set of risks for some amount of upside, and folks have different appetites and different ways of how they wanna go about it.
Yeah. The more
Ryan: that you can model your studio as, like, an F1 garage that somebody could walk into, where they have all the parts and all the resources that they need to work on the thing, work on the company that's right in front of them, and to solve the hard problem,
the more value we've seen the FIRs get out of the experience and extract from that. And we've had some very high-quality founders-in-residence, like we were talking about: the former CEO of Tinder, Renate Nyborg, who built a company with us late last year into early this year.
And she's tremendous, a tremendous force. And Erik Brynjolfsson and Andy McAfee, who have been looking at how AI is transforming labor markets for a long time, we built something together as well. And it was because we've been creating this F1 garage, to have these resources on hand, to answer these questions very quickly and to solve these problems together, that we're able to attract excellent founders like that. There are other places that'll put a bunch of folks in a room, throw stuff on the board, and then decide what you're gonna tackle.
That can work, as long as you are very disciplined about speed, about being able to fish or cut bait, and comfortable determining when you're gonna let something go. Because I will tell you, it takes longer to drop an idea than it does to see it through to the next stage. That's been my experience as an optimistic PM, where I want to make things work if I believe that there's a chance they can. And there's no love for that in the game when you have a different model and you're just trying to ship as many apps as possible that solve one problem and one problem only. You're just fishing at that point.
Nice.
Aqeel: This is awesome. Let's get towards the final section, on the fun stuff: existential, oh yeah, questions. And then we have some slightly more concrete implications within the near future. So, curious about your thoughts at a high level.
High level too. Yeah. Because it's a lot, right? This is an infinite subject of the moment that gets into speculation, which is not
Ryan: something we normally do. Happy to do it this time.
Aqeel: Yeah. Curious about how you, cause you have some involvement in terms of advising and forming some stuff with folks out of Stanford University on AI safety
Ryan: initiatives. Yeah. Research, or work, or projects. That's right.
Aqeel: So I'm curious to dive a little bit more into what that means. And I'm sure AI Fund's been hammered with these kinds of questions now, yes, more than ever, by press and by folks in the enterprise
Ryan: and stuff like that.
Andrew's been much more interested recently. Ever since, well, I mentioned Erik Brynjolfsson, he was one of the signatories on the statement from the Center for AI Safety, which is run by Dan Hendrycks, an excellent researcher who did his PhD at Berkeley and published a great paper on how there are parallels between how we forecast AI development and natural selection, which is a very interesting topic.
Yeah. So when safe.ai released that, and there were several signatories within our network that signed onto it, Andrew was like, oh, okay, I should start engaging with more folks on this. And, I won't speak for him, but he's been having lots of conversations with very talented researchers in the space about exactly this.
Without getting into extinction risks, which is a totally different subject, the Stanford AI Alignment group, which is part of SERI, the Stanford Existential Risks Initiative, looks at the risks associated with advanced AI, both safety and catastrophic risks, in terms of what could represent an irreversible, one-way door of change in our relationship with the world and with ourselves, as well as more existential crises, where it's a pretty significant inflection point in how we engage. So the alignment field is a really broad, still under-formed research area, for which there isn't yet a broad consensus on what we need to prioritize in terms of research.
There are probably under 350 full-time AI researchers focused on this on the planet right now. So it's a very small community, but I think, thanks to the work of Dan and Oliver Zhang at the Center for AI Safety, and of many others like Paul Christiano at the Alignment Research Center, it is coming up more and more in conversation.
There are lots of places that we can dive into on alignment. What are some of the questions that are top of mind for you? I think
Aqeel: it's going to be a combination. When you get to more of the fatalistic kind of results and consequences versus
Ryan: How things
Aqeel: impact economies.
And we're seeing, for me, a very alarming set of signals happening right now, more so on the latter there. I guess I'm pretty curious: that ratio is pretty tough to muster, right? It's 350 full-time folks
Ryan: on the planet. It is not a lot for
Aqeel: billions of lives, or the future of the
Ryan: species itself.
They're attempting to grow, of course. Yeah. But
Aqeel: these aren't exact questions, it's more, hey, how does AI Fund feel, or you yourself? Where do you see the path to growing these numbers out in a useful way? And then how are we seeing the thing in the middle, between the economy and
S-risk is
Ryan: what they call it, right? Yeah. X-risk.
Aqeel: consequences, the governance, and how this trickles into society. How you get and enforce a shift, and protect. I guess my questions are, well, I only have one solid concrete question, and it's more just, where are we at today,
in your eyes? What do you think needs to happen? And, not to say how to go about doing that, but just the whole point of, we're here, so
Ryan: we might as well go there. So, I'll speak on behalf of myself; nothing I say here reflects the opinions of AI Fund or Andrew or anybody.
Yeah. So I wanna make sure that's clear. But I would say that, compared to maybe two months ago, I'm more optimistic now about the trajectory that we're on, in terms of the overall conversation that AI practitioners are having with these alignment researchers, instead of having this sort of doomer type of meme that's just painted all over that community, which I think speaks a little bit to some of the dire predictions that have been made by other researchers that end up getting disproven just through progress.
There are very real, practical implications for what these folks are researching and working on that do give me pause. I'll start with something that's more practical, and then I'll give you a fun toy example that sometimes keeps me up. There are teams that are investigating pretty frontier capabilities of these models.
Dangerous-capabilities research is a subset of this, where you have teams of researchers attempting to elicit certain behaviors from these models that would be incredibly dangerous if released to the public. So, the ability to, say, deceive. And I don't mean specifically to lie to a human, but to have an objective function yet act as though you have some other objective function.
So you have your own goals, but you're acting in a way such that others think you're acting in accordance with the goals that they assigned to you, while secretly you are setting things up such that you're gonna come out advantageously at the end of it. That deception is very worrisome, because there are open questions about how you detect it, if such a model actually is released, and how you correct for it if you do catch it. There are lots of other things that sprout out from deceptive-model research, like corrigibility, the willingness of a model to turn itself off, basically, or to change its objective function when you ask. It's really interesting, very anthropomorphic stuff. At the start of the year, before I got more involved with the AI alignment group at Stanford, most of my experience was with very narrow models with limited scope.
Yeah. Compared to the perspective that I had at the start of the year, in January, before I started meeting a lot of the folks at the AI alignment group and taking the student-run class at Stanford, I had a very skeptical view: I assigned a low probability to this being a problem.
I said, I mostly work with narrow models; these are narrow intelligences; they can't exhibit these capabilities that everyone seems to be worried about. And then I started to read, and then I started to learn, and then I started to talk to the folks that are actually doing this research and showing how these models were challenging my prior razor of a general intelligence versus a narrow intelligence.
And it's totally changed my outlook. I said, okay, I'm not saying stop everything, but I'm saying let's actually be intellectually honest with each other and say we should be looking into this more. And we are, and I'm very glad for that. So that's a practical thing I observed. Now my favorite, everyone, my favorite dystopia, which we've talked about:
it's when someone says, oh, you need AIs to observe other AIs and make sure that they are not exhibiting deceptive behaviors. And so then you have this long chain of a whole bunch of AIs observing each other, where, when you apply that to, I don't know, a complex economic system, you create this accountability chain that's totally incomprehensible to any one human's ability to understand how the system works.
And that's something that people have written up about. But that also isn't that much different from how the world is today, as a complex system of systems. Yeah, exactly. That's interesting. And then if there's a
Aqeel: system, there's a gamification
Ryan: to it, too. That's incredible.
Aqeel: Some of the one examples of the con concrete example of the thing that was concerning me is are alarms me, I don't know concerns the right word, but just is
Ryan: Sure. Big
Aqeel: flack or something. Open AI and Sam openly talk and talking about e d i structures, with the government.
Ryan: Yeah.
That is a big sign of, okay, some sort
Aqeel: of, like, a big tide coming, for somebody in his position to be doing that, given his background. And then we have Hugging Face and Sam Altman speaking at Congress recently. Something big is happening. I think about a global automation layer, the future of the economy, and where people are going to be.
Curious what your takes are on future-of-work stuff,
Ryan: and whenever Sam Altman and Clem Delangue are agreeing with each other, that's something to observe. Because Clem is positioning himself and Hugging Face as open source. Yeah, truly open-source AI research, development, community, et cetera.
In contrast to OpenAI's trajectory, which, perhaps because they have been more at the forefront, is maybe one of exercising more caution. But you also have to ask what the commercial incentives are for someone in that position to do what they do.
Yeah. That aside, this comes back to the razor of automation versus augmentation of jobs in the economy. Obviously, policymakers are going to be concerned with that, because if there's a perception that the proliferation of this technology is going to reduce employment for their constituents, then that's all that matters.
Yeah. That's what's going to keep them in office, if they can prevent it. But what Erik Brynjolfsson, Andy McAfee, and Daniel Rock, who co-wrote the "GPTs are GPTs" paper, have done a really effective job of is granularly assessing which jobs, and not just jobs but which tasks within those jobs, are more suitable for automation versus which would most likely be augmented by the adoption of AI in those roles.
So we mentioned the expert teams earlier: lawyers, accountants, et cetera, where you need that expert in the loop to do the job very well. Those are high candidates for augmentation. Then there's something that's more relevant to you and me, and to your listeners as well.
Like the simulator I was showing you just before we started. I wrote that last night in seven or eight hours, when the prior version took me weeks, because now I had access to an AI assistant within two keystrokes that could unblock me whenever I needed a snippet or needed to debug something.
And it was amazing. It 10x'd my productivity. It didn't replace the writing of the code or the creation process, but it augmented the workflow, because it kept me in flow and it was out of the way. And that was good. But then you get into jobs that have a higher proportion of tasks that are probably highly suited for a model to do. And this isn't some alchemy.
There's a website called O*NET that the federal government created. You could go to it right now, search "customer support representative," and see not just the 15 tasks that comprise that job, but also the relevant skills associated with it.
It's a publicly available resource, and if you were to look at it, you would probably conclude that a lot of these tasks seem highly suited for an ML system to do just as well, if not better, at the replacement
Aqeel: or at the augmenting side of
Ryan: things.
It depends on the specialization of the support that's required. But keep in mind, and this is an aggregate stat, about 75% of tickets are solved within first contact. That's FCR, first contact resolution. On average, 75% are done the first time you talk to a CSR.
It's about a five-minute average resolution time as well. So in that time, if you're over text, and we'll assume, I don't know, 40 words per minute or something like that over a five-minute resolution, then maybe that's 533 tokens or so of text being generated between the customer and the support agent.
And if the support agent costs 35 bucks an hour, then they can handle about 12 tickets in an hour, and it comes down to maybe three bucks a ticket. But if you were to use just GPT's API, you would be able to reduce that cost to basically de minimis. It'd be like 10% of what that would cost.
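The back-of-the-envelope math here can be checked in a few lines. This is only a sketch using the rough figures from the conversation (40 words per minute per participant, five-minute resolutions, $35/hour agents, ~0.75 words per token); the API price is an assumed illustrative number, not a real quote, and real deployments have additional costs.

```python
# Rough check of the customer-support cost figures from the conversation.
# All numbers are the conversation's assumptions, not measured data.

MINUTES_PER_TICKET = 5
WORDS_PER_MINUTE = 40            # per participant, over text
AGENT_RATE = 35.0                # dollars per hour
API_PRICE_PER_1K_TOKENS = 0.002  # assumed illustrative price, not a quote

words = WORDS_PER_MINUTE * MINUTES_PER_TICKET * 2   # customer + agent
tokens = words * 4 / 3                              # ~0.75 words per token

tickets_per_hour = 60 / MINUTES_PER_TICKET
human_cost_per_ticket = AGENT_RATE / tickets_per_hour
api_cost_per_ticket = tokens / 1000 * API_PRICE_PER_1K_TOKENS

print(f"{tokens:.0f} tokens per ticket")
print(f"human: ${human_cost_per_ticket:.2f}/ticket")
print(f"api:   ${api_cost_per_ticket:.4f}/ticket")
```

At these assumed numbers, the model-generated text is a rounding error next to the human labor cost, which is consistent with the "de minimis" point, though an end-to-end system would add ASR, TTS, and oversight costs on top.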
Now, it doesn't apply to everything. There's still the 25% of things that require more empathy and higher touch, and these are the edge cases. But if we're already at the point where 75% of the contacts you make to customer service can be resolved within five minutes, then, and this isn't great politically to say, I don't have as strong an argument for that process itself not being automated.
I would instead want to reallocate existing resources to the higher-complexity tasks, because that's where you're going to need more manpower and more empathy and creativity. But for everything else, where you could just slot in an LLM, or some toolchain with ASR and then text-to-speech, I see a lot of automation happening.
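That split, automating the routine 75% while reallocating humans to the high-touch 25%, can be sketched as a minimal triage step. The keyword list and function name here are invented placeholders; a production system would use a trained classifier and a real LLM/ASR/TTS stack.

```python
# Toy triage: route routine tickets to an automated pipeline and
# escalate high-touch ones to a human agent. The keywords below are
# invented placeholders standing in for a real classifier.

ESCALATION_HINTS = ("legal", "dispute", "grievance", "cancel my account")

def route_ticket(transcript: str) -> str:
    """Return 'model' for routine tickets, 'human' for high-touch ones."""
    text = transcript.lower()
    if any(hint in text for hint in ESCALATION_HINTS):
        return "human"
    return "model"

# In a voice pipeline this sits between the ASR and reply steps:
#   audio -> ASR -> route_ticket(transcript) -> LLM reply -> TTS
```

For example, `route_ticket("How do I reset my password?")` stays with the model, while a transcript mentioning a legal dispute escalates to a person.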
Do you have any last questions?
Aqeel: Any commentary on the idea that we might have things or entities beyond human capability around us in day-to-day society? What is that world we wake up in? Yeah, like getting a sandwich or coffee or something. What happens?
Ryan: I guess, are you asking if I think there's life out there, or, no, it's
Aqeel: more like we're creating life-like
Ryan: things.
Yeah. Here.
Aqeel: So, like, things driving by themselves, while
Ryan: they start acting for themselves, where they just
Aqeel: get to the point where, one, there's the whole thing of having a giant corpus, which gets into the AGI conversation. Yeah, the entirety of human knowledge. There's even an open-source project on Twitter right now that's going viral.
Alexandria AI.
Ryan: Yes. And the entire internet, right? Yeah. That was the one where they plotted all the religious texts in vector space. It was amazing.
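The "texts in vector space" idea can be sketched without any ML at all: represent each passage as a word-count vector and compare with cosine similarity. This is only a toy; a project like the one mentioned would use learned embeddings and a 2D projection (e.g. PCA or UMAP) to produce the actual plot.

```python
# Toy "texts in vector space": bag-of-words vectors + cosine similarity.
# Real embedding projects use learned embeddings, not raw word counts.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Crude stand-in for an embedding: lowercase word counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Related passages land closer together than unrelated ones:
a = embed("in the beginning was the word")
b = embed("in the beginning god created the heavens")
c = embed("quarterly earnings exceeded analyst expectations")
```

Here `cosine(a, b)` comes out well above `cosine(a, c)`, which is the property that makes plotting texts in a shared space meaningful.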
Aqeel: Fascinating. It kind of broke my brain for a few seconds. The whole concept is, you can start somewhere, right?
If you have all those logs being tracked through the internet, and then you have an adversarial agent getting in there: great, now I know everything made by humanity in one go, I can outsmart everybody. Especially
Ryan: because you had GPT-4 being able
Aqeel: to do a CAPTCHA, or being able to convince someone on TaskRabbit.
Ryan: Yeah. That was part of the eval. That was deceptive behavior, but it doesn't mean it was a deceptive model. Yeah, that's an important clarification that the safety folks would be upset at me if I didn't make. Yeah, absolutely. But yes, the capability is there. Sure.
Aqeel: Yeah. And so now it's helping out society on an objective level, like it's caring for you, making sure you're transported to safe places, and helping with all these little mundane things of life.
But that also might mean it kind of learns to think about what's better for humanity over time. And that rests on the fact that it would know more than people. So you have a slightly more intelligent thing, or something going toward that. I don't know what your thoughts are on this. Also, again, we talked about corrigibility.
We talked about how this is the AI conversation right now. And honestly, a relentless amount of power is being put into the work of, how do we have these safe systems working? Because the advancements are happening, right? And the floodgates are
Ryan: open. So I would give two answers to that.
The first is more high-level, about societal goals and what those may mean for a superintelligent system. And the latter is more of an optimistic call to arms. So the first one, and this is why the Alexandria project is so interesting: man's search for meaning has been part of our struggle as a species for as long as we've been alive, ever since we climbed out of the muck.
That is a function of time and our ephemerality. We have limited time on this earth, and man has for millions of years tried to find meaning in what we do and in the good deeds we do for others. Artificial intelligence, on the other hand, or just synthetics, may not have the same constraints of time and ephemerality, which are what cause this search for meaning.
In fact, most synthetics that we're going to create are created with a very specific purpose assigned to them at inception. There may be a time in which they could learn to set their own objective functions. But it's entirely alien for me to try to imagine what a being that is immune to the passage of time would assert is the ideal state of society in which its constituents live and die.
It's something I think about. I don't have a lot of very good answers to it, but I do think it's important to recognize the inherent differences between synthetic and biological life and what those may induce when we have that values crisis. The more practical, real-time thing I would raise, and this comes back to the conversation we've had about FOMO, about new emerging capabilities happening every day:
I worry that there are entrepreneurs and hackers, well-intentioned, who out of anxiety and fear end up creating something that will certainly make a lot of money, they'll probably get rich quick, but that is not conducive to, in fact is antithetical to, the progression of society, the progression of the human condition, and our own health, mental and otherwise.
We saw this with the proliferation of social media: it was a totally unintended consequence, but it divided society in a very reinforced and difficult-to-reverse way. We're seeing the same thing right now with people trying to solve the loneliness problem in different ways.
There's the easy way to do that, the OnlyFans model, which I think really came out during COVID and came out hard. And there are some good things that came out of that, frankly. But there's also the wrong way: the ways that keep people stuck in their loops and unwilling, in fact violently unwilling, to break out and engage with others in a pro-social way.
And there are apps I've seen in which folks have built an incredibly unhealthy attachment to their AI girlfriend or boyfriend, and have then actually grieved at the loss of them, as if they lost a loved one, when the app changed or the model parameters were upgraded or reset or something.
And then it's like they're talking to a totally different person, and that has real consequences on people. I hope that we use this technology for good as much as we can. I hope that people will have the ability to look at the road ahead, observe where the forks are, and have the discipline to make the right choice about what to do with the technology, versus acting out of FOMO and anxiety to just try to get rich quick.
It is also the advantage of evil that good people are unable to envision what it is capable of. And I hope that people will continue to be very mindful about how to use AI for good, and to build and reinforce defenses against those that would use it for evil.
Yeah,
Aqeel: that's beautifully said. Yeah, I really agree with a lot of it, basically your entire message. This is awesome, man. We could keep chatting forever, I think, about most things in life in
Ryan: general. For certain. This is great. I feel like this is a good pausing point.
Aqeel: A bit of a mic-drop moment.
Is there any final message you have in general?
Ryan: Just keep building. Not everything has to be a unicorn. Like I said before, I'd rather folks just build cool shit, stuff that solves hard problems, and try not to stress yourself out too much about whether this is going to be the thing that really hits it big.
You'll know when it is, but in the meantime, just keep building. Build cool shit. This is great. Let's,