
Highlights from Anorak Ventures' event with newkid at LA Tech Week 2022.


As a seed-stage venture capital firm investing in emerging technology, we meet incredibly smart technical founders who have created astounding technologies and products. However, we regularly see these founders struggle to connect with an audience and get that audience from "spectator" to "customer." We've worked with many of our portfolio companies on marketing and messaging to help these great products become great companies, and we partnered with branding agency newkid to capture some of these lessons at our talk "PERCEPTION IS REALITY" at this year's LA Tech Week. We've summarized that talk below.


1. PEOPLE AREN'T NUMBERS. Startups often define their target customers in terms of broad demographic slices, like "fitness-oriented women" or "services businesses under $10 million in revenue." These are a good start to help narrow your focus, but you can't build a company around this kind of audience without a deeper understanding of who they are.


This is the first, and most important, step towards developing your company's positioning. You need to understand your customers more broadly than how they would interact with your product. You need to meet them, talk to them, and understand them without trying to sell them anything.


If your company becomes very successful, your company, and you personally, will have a leadership role in your industry, the way that Tim Cook at Apple or Satya Nadella at Microsoft are important figures in the global technology industry. Just as you wouldn't run for President of a country without understanding its people, its history, its music, its movies, its stories, its food, and its culture, make sure you understand every aspect of your customer, not just those related to your business.


Homework: Spend a day doing customer service. Join a subreddit. Jump into a mosh pit.

2. NO IS THE KEY WORD. As a startup, it's hard to say no, especially to a customer who wants a specific feature and has an open checkbook and is ready to pay for it. A customer might truthfully say, "I'd buy this if it had SAP integration!" And they might pay you enough to cover the costs of building it, and then some. But a startup's biggest advantage is focus, and that SAP integration delays the rest of your vision, creates openings for competition to enter, and makes your product more complex, and thus worse, for the rest of your customers who don't need it.


That's not to say that you shouldn't listen to your customers, but your customer feedback shouldn't go directly into your backlog. Your deep insight into your customer (see point 1) is what gives you the confidence to set the vision for your company yourself, and stay true to it. After all, if you're just building what your customer tells you to, you're building someone else's company for them.


Homework: Delete 5 user requests. Move them from Priority 2 to "Priority Never."

3. CLARITY IS KING. Good startup founders know their markets better than anyone else in the world, and hire other people who are also deep domain experts. But when they communicate to their market, they can fall into the trap of being too abstract or too vague in their messaging.


Vague marketing phrases like "Reimagine your X stack" assume that someone is thinking about X, understands what an "X stack" is, and even wants to "reimagine" it -- it's likely that none of these are true!


A startup needs a message so clear that anyone in your target audience immediately understands what your company does and why they would want your product.


This is an area where startups have a strong advantage over large companies. Large companies sell many different products to many different types of customers, so their marketing is necessarily vague. But look at the most successful companies when they were smaller, and you can find the focus and clarity it takes to succeed as a startup. For example, Twilio's home page in 2010 described exactly what it sold: a "web-service API" to "voice-enable your apps" or provide "text messaging for your apps."


12 years later, in 2022, Twilio is a public company with $3 billion in revenue, selling hundreds of different products. Their home page is necessarily vague, referring to "data-driven customer engagement at scale."


Don't look to modern megacorporations for clues about how to market your startup -- look at how they marketed themselves when they were your size. Find a way to describe your company that is so clear that your most clueless investor (hopefully that's not us!😄) could sell your product for you.


Homework: Ask your best friend to describe your startup in one sentence.

4. ATTENTION IS EARNED. People will read a 10,000 word magazine article about Apple or Facebook because people already know those companies and want to learn more about them. But as a startup, you're just one of a million things that people could be paying attention to. Why should they pay attention to you? You need to have a first sentence that earns the user's attention and gets them to read the second sentence.


A great example of this is Apple's original advertising for the Macintosh 38 years ago, when many people found computers perplexing and intimidating:

There's a LOT of information in this advertisement, but the opening tagline gives the target audience a reason to read further.


Homework: Write a 1-2-4 pitch for your company. One sentence to make your audience care. Two more sentences to explain what you do. Four more sentences to support your claims.

5. EDUCATION IS EXPENSIVE. You have 30 seconds to talk to a customer who's been alive and learning things for 30 years -- instead of teaching them everything about your product from scratch, it's far more effective to tap into knowledge that they already have.


The "Uber for X" marketing approach is definitely a bit tired, but it existed for a reason -- instead of having to explain the concept of an on-demand app, the "Uber for X" formulation more concisely tells your customer what they need to know.


Homework: Formulate your company in terms of one or two concepts that your customer already knows.

6. WHAT GETS REPEATED GETS REMEMBERED. A customer won't remember something after seeing it once. The way to go from a brand impression to brand recognition is consistency and repetition.




As a startup founder, you'll be giving your pitch thousands of times, across multiple channels like video, Web, and in-person. Saying the same thing thousands of times can get exhausting, but resist the temptation to "change it up" or "keep it interesting." That doesn't mean reciting canned responses in every situation; rather, find your handful of key phrases and visuals, and use them across every situation.


Your target customer is hearing your pitch for the first time, so make sure they're getting your best material.


Homework: Pull up your website, app, social media, emails, pitch deck, etc. Do they all look and sound like they came from the same brand?

7. YOU'RE NOT ALWAYS THE EXPERT. Founders are accustomed to "doing it themselves," but an important part of being an effective founder is knowing when you need a domain expert. You wouldn't try to build your company's iOS app if you've never written a line of code, but we see founders trying to build their startup's pitch deck with built-in PowerPoint themes.


If you had to make a single three-point basketball shot and your life depended on it, would you take the shot yourself, or hire Steph Curry to take it for you? Of course you'd hire Steph Curry, and he doesn't work for free.


As in any market, you don't always get what you pay for just by paying for it. Find a trusted referral to a branding and marketing agency that has done work that you respect. Set clear guidelines on expectations, and if you're unsure about fit, start with small deliverables before moving into bigger projects.


Homework: Ask for referrals to three branding or marketing agencies from founders whose branding and marketing you hold in high regard.

8. PERCEPTION IS REALITY. Your customer often has to invest a huge amount of trust in you, by providing their personal information, medical records, banking passwords, or even access to their home, and they have very little information to go on. Is your server's root password "password"? They can't assess your trustworthiness directly, so they index very strongly on how you present yourself.


What looks to you like a simple spelling mistake is a red flag to a customer. Poor attention to detail is the quickest way to lose hard-earned customer trust.


Homework: Check every word, every pixel, every button, every interaction... and do it again next week.

 
 

Learn more about Anorak Ventures at https://anorak.vc or Newkid at https://newkid.services.







I’m pleased to announce that I have joined Anorak Ventures as a Partner, working with Managing Partner Greg Castle to invest in and support exceptional founders in emerging technology (more about me here). I’ve described Anorak’s area of specialization as “Computing in the Third Dimension” – in this post I explain what that means, why it’s novel, and how it will impact the future.


Trapped in a box: the two-dimensional computing interface


The history of computing is widely understood as a series of “eras” of increasing power, each with its definitive leaders:

  • The mainframe era, led by IBM

  • The personal computer era, led by Microsoft

  • The Internet era, led by Google

  • The current ubiquitous computing era, led by Apple in devices, Facebook and Google in consumer services, and Amazon in cloud computing

  • The AI era, which is still in its infancy


Each of these eras made computers simultaneously more powerful and less expensive, making computing more accessible. Cheaper silicon birthed the personal computer era, broadband adoption unlocked the Internet era, and the launch of Amazon Web Services in 2006 and the iPhone in 2007 kicked off the ubiquitous computing era. Through these eras, computers have consistently become faster/better/stronger every year: from VisiCalc’s 254-row limit to petabyte-scale data lakes, or from Usenet posts to Skype calls to FaceTime, computers have gained a bigger role in our lives as they have become more powerful and easier to use.


Despite the onward march of technological power, our experiential interfaces with computers have stagnated in a two-dimensional paradigm. The original Apple Macintosh shipped 38 years ago with a mouse, keyboard, monitor, and printer – the same user interface that we use today.



We still work with computers through an interface invented in 1968 and popularized in 1984.

Smartphones introduced the multitouch interface, but still on a two-dimensional screen. Our entire mental model of software revolves around two-dimensional actions like clicking, dragging, and scrolling. Tellingly, the organizing principle of Web design is the “box model,” forcing every element on every website into the confines of a “box.”


But our sensory systems, and our minds that integrate their input, are inherently three-dimensional and spatial. Written text is ~5,000 years old and pictorial art is ~50,000 years old; spatial reasoning is over 50 million years old, and our most highly developed information interface. We can easily walk through a cocktail party and identify the conversations that are interesting to us, or walk through an office and tune into the right conversations to stay informed. Without spatial reasoning – if we simply listened to all of these overlapping conversations in an audio recording – it would sound like an incoherent jumble. Through two pandemic years of sitting on Zoom, staring at each other in little boxes, we’ve each learned for ourselves that two-dimensional computing simply cannot capture or represent the vibrancy of our three-dimensional world.


On two-dimensional computing surfaces, we lose our mental superpowers and our communication superpowers. Our sarcastic remark is misunderstood as sincere; our request for clarification is misunderstood as a passive-aggressive attack. As a result, our physical selves inhabit an entirely different world from our digital selves, and our lives feel strongly bifurcated between “IRL” and “online” interactions.





We want our online interactions to feel "real" -- they can certainly have major consequences in the physical world -- but our two-dimensional online interactions rarely have the emotional tenor of our IRL interactions. After two pandemic years limited to primarily online interactions, restaurants, airports, and highways are packed with people seeking the richer texture of the physical world.





The way forward: computing in the third dimension


The good news is we are in the dawn of a major computing transition as important as the advent of the Internet. Computing, having been “trapped” for decades inside the world of structured databases and two-dimensional inputs and outputs, is stepping out into the physical, three-dimensional, rough-edged world. At Anorak Ventures, we call this trend “Computing in the Third Dimension,” and some of its pillars include:

  • Computers are understanding the physical world with computer vision and artificial intelligence, capturing much deeper insights with far less manual data entry

  • Computers are acting in the physical world with robotics, turning our understanding of the world into tangible outcomes

  • Computers are creating synthetic worlds through virtual reality and augmented reality, creating experiences for users that have all of the vibrancy, communication bandwidth, and emotional timbre of physical-world experiences inside entirely constructed environments

  • Computers are using generative AI to supercharge these synthetic experiences, allowing users to “construct their dreams” with experiences unattainable in the physical world, but sensorily indistinguishable from reality.


In all four of these areas, the common thread is that the interface boundaries between digital and physical experiences are being dissolved, bringing the power of technology into the physical world with unprecedented scale, and bringing the power of the physical world into the technological domain with unprecedented detail and subtlety.






Computer Vision and Artificial Intelligence


Computers have always been tools for calculation, record-keeping, and analysis, and their correctness has always depended on the correctness of their inputs. In 1864, Charles Babbage, the father of computing, wrote:


“On two occasions I have been asked [by members of Parliament] - ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.”

This principle, less eloquently known as “garbage in, garbage out,” has long made data entry a critical business function. When data entry was performed by humans, and double- or triple-checked by other humans, only high-business-value data even got digitized in the first place. The IRS digitized its collection operations to catch tax evaders, manufacturers used ERP software to manage their operational planning, and financiers digitized capital markets to gain better visibility and control of their risks and opportunities. But for every occurrence in the world that was digitized, billions or trillions of undigitized interactions went completely unrecorded.


This began to change when computers took over their own data entry, in earnest from the 1970s onward. Banks started using optical character recognition to automatically record check numbers, and retailers used barcode scanners and later RFID scanners to automate tracking and inventory. These technologies lowered the costs of data acquisition, but only for pre-defined scenarios with standardized data schemas, like scanning known and labeled objects at a cash register.


Recent advances in sensing hardware and machine learning have vastly increased the surface area of automatic data capture and analysis. Instead of setting up our world for computers, by adding barcodes and RFID tags to products and placing scanners in employees’ hands, data acquisition and analysis can run passively without a human in the loop. Target’s security cameras can track a box of diapers from the warehouse to the store to the trunk of your car. The hardware to acquire data, such as cameras and accelerometers, is getting cheaper and more power-efficient, while the machine learning algorithms that analyze this data are getting increasingly powerful and able to extract higher-level insights. This enables new human interfaces like interactive voice and gesture recognition, as well as software that can analyze and react to data without any human interface.


Is this a good thing? Do we need or want to have an analysis of every time we sneezed, every dog that barked at us, or every blade of grass that we walked on? Perhaps not, but Anorak portfolio company SafelyYou is using computer vision to make our world safer for vulnerable populations.


SafelyYou is solving the extremely difficult problem of senior citizens being injured by falls. Falls are the leading cause of death for adults over 65, and even in nursing homes, where assistance is available, falls often go unnoticed because a resident cannot call for help after they have been injured by a fall. SafelyYou monitors a camera installed in the senior’s room, and can detect when they have fallen and immediately summon help. Not only can SafelyYou alert caregivers to a fall, but it can prevent falls – video review showed that one particular resident had fallen three times by sitting on the edge of her bed while watching TV, and simply putting her TV in front of the chair stopped the problem entirely.


It would have been prohibitively expensive, and intolerably intrusive, for a senior to be monitored in their room 24 hours a day by a human being. Computer vision and artificial intelligence are turning the entire physical world into an input surface, allowing vastly more information about the world to be ingested, processed, and acted upon.



Robotics


Tightly coupled with computer vision/artificial intelligence is robotics. CV/AI is a big step forward in understanding the world; robotics helps us turn that understanding into action.


Robotics is certainly not new – low-intelligence robots have been used for over 50 years in automotive factories to perform spot welding and to move heavy objects into place. Robots have even used computer vision for decades, such as in agricultural sorting to separate out unripe fruit. However, these robots were purpose-built for a single task, and often needed no intelligence or sensing feedback of any kind.


Today’s robots are vastly more versatile than first-generation robotics due to two major trends: the sensing hardware and machine learning advances described earlier, and the increasing power and decreasing cost of actuators (brushless motors, motor controllers, accelerometers, lithium-ion batteries, and the inner-loop control software). Today’s robots do not just mechanically perform an operation again and again -- they can sense their environment and choose the right course of action situationally. The most well-known application of this is self-driving cars, but one of the most interesting applications to us is using robotics to conserve valuable natural resources.


Anorak Ventures’ portfolio company Irrigreen has developed a robotic irrigation head that uses hardware and firmware similar to those you would find in an inkjet printer to “print” a precise pattern of water onto the surface of a lawn. Over 30% of America’s municipal water goes towards watering lawns, and close to half of this water is wasted by traditional “plastic stick” lawn sprinklers that can only water in circles and thus have to be wastefully overlapped.


Irrigreen’s robotic lawn sprinkler eliminates waste and overlap by a tight orchestration of software and hardware. After the user configures the shape of their lawn on their smartphone app, the Irrigreen system uses rain forecasts and soil moisture readings to water the lawn precisely as much as needed, adjusting the angle of the head and the water flow rate as the head sweeps out a full circle:



Irrigreen's digital sprinkler head "prints" a precise pattern of water to minimize waste.
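The sweep-and-adjust behavior described above can be sketched in a few lines of code. To be clear, this is a toy model for illustration only, not Irrigreen's actual firmware or API: the oval lawn shape, the flow formula, and every name here are invented for the example.

```python
import math

def lawn_radius(angle_deg):
    """Distance (meters) from the sprinkler head to the lawn edge at a
    given angle. A real system would derive this from the lawn shape the
    user draws in the app; here we fake an oval lawn with semi-axes a, b."""
    a, b = 8.0, 5.0  # invented lawn dimensions, meters
    t = math.radians(angle_deg)
    # Polar equation of an ellipse centered on the sprinkler head.
    return (a * b) / math.sqrt((b * math.cos(t)) ** 2 + (a * math.sin(t)) ** 2)

def sweep(step_deg=10):
    """Sweep a full circle in angular steps. At each step, set the spray
    reach to the lawn edge and scale the water volume to the area of the
    wedge being covered, so farther edges get proportionally more water."""
    plan = []
    for angle in range(0, 360, step_deg):
        r = lawn_radius(angle)
        wedge_area = 0.5 * r ** 2 * math.radians(step_deg)  # m^2
        plan.append((angle, round(r, 2), round(wedge_area, 3)))
    return plan

for angle, reach, volume in sweep(45):
    print(f"angle={angle:3d} deg  reach={reach} m  relative volume={volume}")
```

The point of the sketch is the contrast with a "plastic stick" sprinkler: instead of one fixed circle, the reach and volume are recomputed at every step of the sweep, so no water lands outside the configured lawn boundary.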

Sensing hardware, actuator hardware, controller hardware, embedded software, machine learning, and cloud computing all work together to deliver the experience that the Irrigreen customer sees on their smartphone app. Because of these interlocking pieces, robotics companies like Irrigreen are tremendously complicated to build and operate, but the founding teams who can successfully do so (and it is almost always a team, with diverse skill sets and work experiences) can deliver value that pure software simply cannot.


Robotics is turning the entire physical world into a computing output surface to match the rich input interfaces that computer vision and AI have enabled. In tandem, AI and robotics are allowing computers to, in many cases, even exceed humans in their ability to sense and to act. The AI in your Apple Watch can detect that you’ve fallen down and have an elevated heart rate; a robotic drone can now fly a defibrillator to you and save your life.



Virtual Reality and Augmented Reality


Virtual reality (VR) is a technology that may eventually eclipse the Internet in its impact on societies, economies, and human lives. I’ve written more about my most optimistic hopes for virtual reality and the reasons that I believe it’s poised to massively break into the consumer mainstream.


The long-term goal of VR has always been to convincingly emulate any experience. If a person sees a dog in front of them on their VR headset, and can pet the dog and feel its fur with their haptic glove, and can hear it bark, and can form a friendship with the dog over time… is it functionally any different from a real dog? That’s really a question for philosophers such as Robert Nozick, whose Experience Machine thought experiment asks exactly this.


Philosophy aside, the Experience Machine is already here. Even Meta’s $299 Quest 2 can transport users into virtual worlds by feeding into their three-dimensional spatial faculties rather than as a two-dimensional windowed experience. When I play Beat Saber for even a few minutes, the feeling of being in an infinite space is so strong that I’m surprised (and a little disappointed) when I take off my headset and find myself in an ordinary room. The impact of VR is even stronger in social interactions, where the illusion of presence creates interactions that feel vastly more real than 2D video calls.


Anorak portfolio company Innerworld takes advantage of not only the increased immersion of social VR, but also the added psychological safety of a remote and anonymized connection. Innerworld offers personal coaching through VR using the techniques of cognitive behavioral therapy (CBT), but in a lower-cost, peer-to-peer model available to those who cannot afford a licensed therapist. This model, called Cognitive Behavioral Immersion, is not only more accessible than licensed therapists, but has specific advantages born of the VR delivery model. The sessions are completely anonymous, which could never happen in a physical service model, and this anonymity allows people to openly discuss topics that they find challenging to discuss in person, even with a licensed professional.


VR is Anorak’s first and heaviest focus area: Managing Partner Greg Castle invested in Oculus’ seed round in 2012, and less than two years later, Facebook had acquired the company for $3 billion, making Oculus the first of six unicorns so far in the Anorak portfolio. Oculus created the modern virtual reality renaissance, and we continue to invest heavily in the VR sector (OssoVR, PrismsVR, Rec Room, and many others).



The dawn of generative AI


Rather than a trend already well underway, like AI, robotics, and virtual reality, generative AI is in its absolute infancy, but accelerating explosively. OpenAI’s DALL-E 2 can construct an image from only a text prompt:

DALL-E 2 creation from only the caption: "teddy bears working on new AI research on the moon in the 1980s"


… while NVIDIA’s Neural Radiance Fields can synthesize a virtual 3D environment from only a few seconds of scanning:



Created by Karen X. Cheng.


It doesn’t take a large leap of imagination to simply “speak” a virtual world into existence with a short prompt and experience it in VR. People will be able to spend time with their deceased loved ones, live out alternate lives and entire realities, experience historical events as though they were real, and enjoy experiences like space travel that would otherwise be attainable only to the narrowest elite. Anorak Ventures does not yet have any portfolio companies in generative AI, but we are eager to invest in this sector.


Computing in the Third Dimension and the future of human-computer interaction


After 38 years of the mouse, keyboard, and monitor, computing is finally breaking free of the two-dimensional interface, and the boundaries between the physical and the virtual worlds are rapidly collapsing. In the next five years, we expect to see:

  • Continued improvement in AIs that source proprietary datastreams and derive insights from these datastreams

  • A Cambrian explosion of robotics, both in form factors and applications, to do everything from services to industrial manufacturing to healthcare

  • An increasingly greater amount of our “screen time” dedicated to VR, and VR being the best way to remotely establish the human connection that was so often found lacking in remote work during the COVID-19 pandemic

  • AI-driven flights of fancy that turn our wildest dreams into virtual worlds we can explore and eventually inhabit


I’m extremely excited to join Anorak as Greg’s first partner and look forward to investing in the founders who are building this world. If you are one of these founders, let's get to know each other: amal@anorak.vc.



Updated: Aug 3, 2022

In the first 10 minutes of this year’s Facebook Connect conference, CEO Mark Zuckerberg mentioned the word “Metaverse” 17 times. An hour later he announced that Facebook, one of the largest companies on the planet, was changing its name to Meta. But what does this nebulous term actually mean?


The term originated in Neal Stephenson’s 1992 sci-fi book Snow Crash. It refers to a virtual, 3D, videogame-like world where people are represented by avatars. Users, be they individuals or corporations, can build destinations like games, music venues, and social clubs along “the Street,” which bisects the entire metaverse. To do so, they must obtain planning approval and pay fees to a trust that’s responsible for server fees and general upkeep of the metaverse. There are multiple currencies mentioned throughout the book, both fiat and otherwise. In summary, no single company owns the metaverse, and no single currency rules it.


While companies talk about building the metaverse, what 99% are actually building is more akin to a microverse. A microverse has little, if any, interoperability with other microverses. They can monetize through subscription fees and by selling items and powerups. They may or may not have their own currency enabling in game economies, and friendships and social graphs are microverse specific. Think Roblox.


Then there are macroverses, which are essentially collections of microverses owned by a single entity. Items and access are still sold per microverse, but because each is owned by the same entity, a single currency may be used. Elements like identity, achievements, and social graphs can be shared, although skills are largely microverse-specific. Think EA’s Origin or Activision’s Battle.net.


Lastly, there is the Metaverse, a universal protocol that makes all things within it interoperable. It’s like reality, only digital, with the rules existing in software rather than nature. The challenge is that while nature dictates the laws of physics, software is created by people, and people don’t always agree on laws -- the same disagreement that gives us varied countries and religions, and why we have 8,100 different cryptocurrencies. The challenge is further complicated when people are incentivized to drive value to their particular belief system, which in the case of Web3 is a core principle. Over time, universal standards are needed, and standards have historically come from centralized authorities -- somewhat paradoxical to the decentralized ethos of cryptocurrency and Web3, and a place where DAOs can help. But I digress…


The conclusion I’ve come to is that while it’s unlikely we’ll see the singular metaverse described in Snow Crash in my lifetime, I expect to see more interoperability between micro- and macroverses. This will appear small at first -- perhaps the ability to read data from a crypto wallet like MetaMask -- but will ultimately become functionality people come to expect. This is where I think the metaverse opportunity lies: in the small threads that can one day form the rope that pulls the world towards that universal protocol that is the metaverse.
