
But honestly, I appreciate the Peanuts special more for how it looks directly at the Christianity of it all. It’s not trying to hide the ball. It is …
Why Linus’ Speech Matters (to Jews)
Shared by my good friend Shanan; a wonderful read.
This also reminded me of the Monster/AB InBev deal mentioned in “The Little Book that Builds Wealth:”
To be fair, it is occasionally possible to take the success of a blockbuster product or service and leverage it into an economic moat. Look at Hansen Natural, which markets the Monster brand of energy drinks that surged onto the market in the early part of this decade. Rather than resting on its laurels, Hansen used Monster’s success to secure a long-term distribution agreement with beverage giant Anheuser-Busch, giving it an advantage over competitors in the energy-drink market.
Anyone who wants to compete with Monster now has to overcome Hansen’s distribution advantage. Is this impossible to do? Of course not, because Pepsi and Coke have their own distribution networks. But it does help protect Hansen’s profit stream by making it harder for the next upstart energy drink to get in front of consumers, and that’s the essence of an economic moat.
Once you find product-market fit you need to quickly scale distribution to own as much of the market as possible and preempt new entrants.
This post was originally published on Inside Q4, stories and lessons learned from the Q4 Inc. R&D team.
When I arrived at Q4 in October of 2021, we were entering a period of hyper-growth in Product Design, Engineering and Product Management. We had huge ambitions and needed to scale our team — fast. I had hired talent before and had conducted dozens of interviews in my career, but the task ahead of me was daunting. As a new hire myself, Q4 was a domain that was still fairly new to me. Plus, the company serves a specialized clientele and isn’t exactly a household name — and the job market is insanely competitive!
With the help of our CTO, my peers, our amazing talent acquisition team and under the guidance of our inspirational and very applicable company values (Grind, Hustle, Iterate, Compete, Care), we reached our goal. Here are some of the things that I learned as the hiring manager on this assignment.
Hiring is a flow, not a project. The point is to run candidates through a process and hire quality, diverse candidates in a fair and equal way as fast as possible — but no faster. Go in expecting that it will take time and unsexy work to do it well, but it will all pay off in the end.
Always be thinking about where a good person could fit with your team in the future. Circumstances change often, and quickly, so don’t ever assume the team you have today is the one you’ll have tomorrow.
You’re going to have interviews where the candidate is amazing but not for the specific role. When that happens, be up front with the candidate. Let them know they’re not going to get the offer for the position, and tell them why. Then, shop the person around internally and keep in touch for future opportunities.
Of the 16 people we hired, three have already been promoted from individual contributors to managers. I’ve also heard some really nice things from colleagues about our evolved team. Hearing this makes me proud. Hiring processes are a lot of work for everyone involved. We put a lot of thought and energy into our part of it, and it’s rewarding to see it come together well.
Much has been written about the Apple + Google Covid-19 Exposure Notification framework. This is the software that is now part of Android and iOS (13.5+) and powers Covid-19 detection apps for Android and iPhone like COVID Alert (much of Canada), COVIDWISE (Virginia) and dozens of other jurisdictions around the world.
I’m in Ontario and use COVID Alert on my iPhone 8 Plus. The apps are fantastic pieces of work from the Canadian Digital Service and its private sector partners Shopify and BlackBerry. That said, I have always wished for more feedback from the app itself. Something that gives me a sense of it actually working. I’m the first to admit that this isn’t a rational need. When you open COVID Alert here is what you see:
I’m grateful that no exposure has been detected! But the app doesn’t look like it’s doing anything. I know that that’s not the case. I know that it is doing stuff, but that’s because I’m a nerd and because the Canadian Digital Service maintains the source for both the Android and iPhone COVID Alert apps on GitHub.
Well, here’s one way. Both iPhone and Android allow you to see a log showing each time COVID Alert has downloaded a list of exposures from the COVID Alert server.
On iPhone you can see the log in Settings -> Exposure Notifications -> Exposure Logging Status -> Exposure Checks.
What I believe this means is that, in that one Exposure Check done at 10:09am ET, COVID Alert downloaded 246 Tracing Keys (“device IDs”) of devices that had had a positive Covid-19 test reported over the past 14 days. It also determined that my iPhone did not get close enough to any of those phones, for a long enough period of time, to warrant me getting a Covid-19 test. It’s pretty cool to see the app at work.
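The matching step described above can be sketched in a few lines of code. This is a simplified, hypothetical illustration of the decentralized matching idea, not the actual COVID Alert or Apple/Google framework code: real apps derive rotating identifiers from Tracing Keys cryptographically, while here opaque key strings are compared directly. The phone downloads the keys of reported-positive devices, then checks them against the identifiers it has heard over Bluetooth, entirely on-device.

```python
from dataclasses import dataclass

@dataclass
class Encounter:
    key: str               # identifier heard over Bluetooth (simplified)
    duration_minutes: int  # how long the two phones were near each other

def check_exposure(downloaded_keys, local_encounters, min_minutes=15):
    """Return True if any reported-positive key matches a local
    encounter long enough to count as a potential exposure."""
    positives = set(downloaded_keys)
    return any(
        e.key in positives and e.duration_minutes >= min_minutes
        for e in local_encounters
    )

# Hypothetical local log: one brief encounter, one sustained one.
encounters = [Encounter("key-a", 3), Encounter("key-b", 20)]
print(check_exposure(["key-b"], encounters))  # True: sustained contact
print(check_exposure(["key-a"], encounters))  # False: too brief
```

The important property this sketch preserves is the privacy model: the list of positive keys is public and downloaded by everyone, while the encounter log never leaves the phone.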
I would also love the app to help me understand:
You can’t tech your way out of a policy or political problem. That said, I strongly agree with what Apple, Google, and the Government of Canada have done here. If the policy decision is to continue to deploy these decentralized, anonymous exposure notification applications on a voluntary basis then we need to keep looking for ways to make them more effective and more compelling to download and use. Sharing more useful information with people could be a way to get more people to use the app and better inform public health authorities on what to do next.
Distributed teams are hard. Distributed teams, where some people are in office and others at home, are harder. Widely distributed teams with people working across countries and time zones are exceptionally hard. Widely distributed teams in a pandemic are damn near impossible to get right.
Culture clash alone is an enormous challenge. I once had a fantastic American product manager piss off an equally strong director of engineering (in India) by complaining that the tool the PM needed “had fallen and can’t get up.” The director—and the engineers on the thread—took it as a tremendous insult, even though the product manager was trying to inject a little levity. The PM should have linked to the commercial:
Distance, culture, and time zone problems make communicating hard in remote teams.
We have spent trillions of electrons this year on how to lead remote teams, so I’ll stick to something more prosaic: communication tools.
Slack, Jira, Microsoft Teams, Zoom – the communication tools you use matter but not as much as how your organization uses them.
Over the past twelve years I have worked in four different distributed organizations that ranged from bootstrapped start-ups to huge, traditional corporations. I often found a lack of alignment caused by the inconsistent use of communication tools. Solving this required us to be way more intentional in how we communicate.
To communicate as a distributed team, you need to establish norms. Brainstorm norms as a group, winnow them down as leaders, align on them as a team and then communicate them out. When it works you get a written, jointly owned artifact that you can use to orient new team members and bolster the confidence of quieter team members. Above all leaders must model the right behavior; they need to adhere to the norms. When the rules are set up front, collaboratively, and followed by leaders, then the team is more likely to follow them. This eliminates the need for awkward corrections down the road.
So here’s my list of communication tools and how to use them.
Good for: Well, really, everything. As my friend Dave Feldman put it, “all other things equal, face-to-face meetings are better for everything because they have really high emotional bandwidth. But they’re the most interruptive, hardest to coordinate, and can be wasteful of people’s time.” Face-to-face is the best way to have hard conversations. Generative exercises. Team building. Anything that started not face-to-face and got heated. Things that require dialogue. Design meetings, planning meetings, betting tables.
Bad for: Status meetings. Networking conversations. Routine discussions.
Good for: Things that require synchronous attention but less dialogue. Structured discussions. Ideally, decisions that are less contentious. Prioritization exercises, design reviews, presentations of an analysis. Collaboration where a small number of people are presenting to a larger number of people.
Also good for: Not getting Covid-19. Right now it’s usually as close as we get to face-to-face. That said, video conferences have unavoidable limitations. We need to acknowledge and mitigate them.
Bad for: Energy levels. Video is draining. Use it sparingly. Find other ways to ensure that all participants are “present” and attentive. You know your team is healthy if it is present without being threatened by the green camera LED. If the meeting is small enough be sure to “go around the room” once and check in with each attendee, one by one, asking them what they have to add. Make sure that everyone feels heard.
Good for: Asynchronous, text-heavy communication. Wide distribution groups. Not time sensitive. Easy to read, so long as you write them carefully and put important content high in the email body. Things that people may want to consider and respond to. Monthly status updates to investors or senior leadership. Summaries of organizational changes (after those organizational changes have been announced synchronously to those directly affected).
Major 🔑: Put the most important information into the subject line and lede (more here). Make sure your emails are easy to read on mobile devices.
Good for: Getting a hold of someone after hours or while they are away from keyboard (e.g. in transit) and you need a dialogue. Always text first.
Patterns: Establish clear norms as to when people are / are not expected to be available on Slack; use threads to allow people to follow topics. Be aware of timezones; encourage use of Away messages. Train people on how to manage their notifications, alert settings, and Do Not Disturb. Don’t assume they’ll figure it out, or will feel comfortable turning it off. Use public channels, created for topics, teams and projects.
Anti-patterns: This is going to need a bulleted list.
Finally, chat can make bad managers worse. The combination of immediacy, reach, lack of nuance, and lack of non-verbal feedback can encourage some incredibly stupid behaviour that can be hard to undo.
Good for: Documentation, references, working documents; Version tracking important; High-fidelity (fancy formatting) not important
Bad for: Things that imply sequence. Workflows, etc. Require a lot of maintenance. Prone to getting out of date.
Good for: Things that have a deadline, dependencies, multi-step tasks, etc.
Bad for: Requirements, specs, designs – anything more than a few lines shouldn’t live in the bug, but should be linked out to a doc from the bug. Bug tracking tools are hell to search.
This includes Google/Word Docs, Slides/Powerpoint/Keynote and Sheets/Excel
Good for: Narratives or analyses; work that needs a lot of formatting and high fidelity. Work that changes slowly. Read-aheads for meetings. Group editing. Tools like Google Docs and Dropbox Paper have great commenting tools although Confluence has come a long way.
Bad for: Requirements. Reference documents. Documents that are expected to be maintained over a long period of time. After a project is over things like Word Documents are typically lost in a shared folder, hidden away, and never used again.
Major 🔑: Tuck documents away neatly after a project. If they are no longer being updated, add a prominent note to that effect at the top of the document, with a link to other relevant, more recent information (i.e. a Wiki). Manage the zombies.
Avoid holy wars. Focus on picking the smallest number of tools that cover the greatest number of use cases, weighted by importance. Don’t let the perfect be the enemy of the good, but be aware of tools with high switching costs (bug tracking, chat) and be careful.
Remember that the process is as important as the outcome. You want to have broad support for the list so that influential team members feel ownership and will actively help to improve communication. That support should be backed up by top-down direction. And finally, don’t be cheap. Buy the right version of the tools and spend the money needed to get them configured and maintained properly. If well-paid software engineers are cutting and pasting tickets between bug tracking systems, I will find you.
Above all, expect change. How an organization communicates will change. Plan for it.
In a distributed organization, leading well requires intentional communication. Intentional communication requires the effective, consistent use of digital tools. To achieve this:
One last thing. Communication is good, but too much communication is not a healthy signal. People need time to think, plan and work. They should not be thrashing endlessly between Slack, Jira and Docs. So make sure that people know how to protect their time and are able to set boundaries, get work done, and feel as though they are accomplishing things.
Special thanks to my friends Michael Masouras and Dave Feldman for their ideas and their thoughtful help polishing this. Updated with ideas and edits from Dave on September 7, 2020.
Since leaving Borealis I’ve spent some time getting to know “no code.” Low Code Application Platforms (LCAPs), also called “low code” or “no code,” seem to have broken through. The promise is that non-programmers can point-and-click their way through building mobile/web apps and deploy them straight to Google Play, the App Store, or a corporate app store. Last November Gartner predicted that “by 2024, three-quarters of large enterprises will be using at least four low-code development tools for both IT application development and citizen development initiatives” and “low-code application development will be responsible for more than 65% of application development activity.” Stuff is happening (waves hands).
We’ve been promised programming for non-programmers since electronic computers were invented. Just Google “4GL” or read the history of PowerBuilder. That said, as with most technologies a big enough difference in degree generates a difference in kind. So what’s different now is:
It’s now “normal” to have a developer deploy live code to production with a click. Ten years ago you’d still expect to have a sysadmin set up physical machines, flow code through a staging environment, whatever. Now, some 22-year-old makes a commit, it runs through automated testing, and minutes later the code is live in multiple data centres for some or all end users. Magic.
Companies now primarily interact with customers through digital channels. Banks, of course, still have branches, and retailers still have physical stores, but these are really just meatspace user interfaces that sit on top of software. Software ate the world. So anyone who works in a large organization and has some customer responsibility is now responsible for software, regardless of their stated job description.
Thanks to cloud, and some mind-numbing-but-vital data fabric building many “citizen developers” can now read-and-write important data in more or less real-time. If you layer in human-in-the-loop stuff like Figure 8, would-be developers are one click away from any bit of data that they could possibly need to build an app.
She who can get the most done, in the least amount of time, and claim the most credit will win. Low-Code lets you move fast because it takes the availability of highly-skilled, specialized technical labour off the critical path.
Oh, you’ll still need specialized labour — people with a formal computing background who can untangle spaghetti code and keep user data out of Unit 61398 — but low-code lets you, as a middling business leader, move fast and break things (cringe). The time to prove value can be way shorter than with traditional programming, and in many cases you can throw dozens of 22-year-olds from $consultingFirm at the problem to show traction and secure your bonus. Then you can get a proper budget in the next annual cycle, hire specialists, and build your empire.
If you’re a BigCo manager you don’t need approval or budget to start a low-code project; you just need access to whichever LCAP(s) your IT and security people have approved. You can cobble things together with existing staff, get a bit of traction (or at least make a sexy demo) and then marshal resources for a real launch. It’s Lean Startup for Enterprise (TM).
The point is, there are technical (cloud, digital first) and business (agility in the face of rigidity) factors that make Low-Code possible, and logical. It holds at least some promise of moving companies into a state of glorious Permanent (Digital) Revolution.
A year ago I wrote that 2016 would be the year that consumer AI went mainstream. In one sense, I was wrong. The average consumer still doesn’t interact with an AI application on a typical day. Siri, Cortana, Google Home, and Allo are making inroads but still have small reach when compared to, say, Android or iOS as a whole.
Looking back, I realize that my angle of attack was wrong. “Consumer artificial intelligence” will not mostly be about putting AI in end-user devices. Siri, chatbots, and other natural language interfaces are a piece of the picture. However, the really interesting stuff is happening just below the surface.
Last week, The New York Times published a long, breezy piece about AI. It centered on the migration of Google Translate from “conventional software” to neural networks. The article is a wonderful ramble through the AI countryside and I highly recommend it.
What was done to Google Translate is a great illustration of where the action is with AI right now. Rather than transforming how we interact with technology, AI is, instead, first transforming what happens behind the scenes in applications like web search and machine translation. This is analogous to what happened with other, historic new technologies.
In the early steam age, manufacturing plants were still laid out in the same way that they were when they were powered by water wheels. When workplaces were electrified, machines were still placed as though they had to be connected to a steam engine and desks were left near large windows, and so on. It takes time to learn how to apply a new technology. But we are learning.
For the past three years, Shivon Zilis and the team at Bloomberg Beta have been mapping the Machine Intelligence Landscape. 2016 saw an explosion of startups applying artificial intelligence to specific business problems. This is because it is easy to estimate, capture, and charge for the value an AI solution creates for a business. It is also relatively easy to identify and process training data.
Because of this, AI is becoming ubiquitous in business software. As a result, two years from now you won’t talk about “AI SaaS companies,” or “AI technology companies selling to business” — you’ll just assume that every piece of software marketing to business incorporates AI appropriately, just as that software uses a relational database or runs on the internet.
We’ll have optimized the heck out of what’s inside the little boxes of a business (sales, marketing, etc.) and will move on to the interesting stuff — creating new things that couldn’t happen without AI, and that blow up the boxes altogether.
This is where things get interesting; what happens when these new companies — with their blown up boxes, and their AI-powered businesses — interact with consumers, directly through their devices? We were given a glimpse of this with Amazon Go a few weeks ago — a grocery store that uses machine vision and AI, along with other technologies, to end the checkout line. The thin end of the wedge was the Google Now and iPhone prompts that tell you when you need to leave to get to your next appointment, or warn you about a transit delay.
If we take a step further back — what do we, as consumers, ultimately want AI to do? Clearly, automating routine work and improving our safety is important. But then what? The answer is definitely not “generate more notifications on my phone.” Or “give me another awful pseudo-human to talk to,” one that’s just as frustrating as the typical Comcast customer-service representative.
So now we truly begin to apply AI to the long project of connecting humans with the things that they value. The things they need to complete their day, raise their families, and run their lives. We need to create new connections from human needs, across time and space, going deep into the organizations that can satisfy those needs. Re-think our businesses, re-think our lives, embrace what is possible, and blow up the little boxes. To create revolutionary change with accelerating, incremental change.
In October of 1994, Netscape released Netscape Navigator 1.0, the first commercial web browser. Over the next decade, the web went mainstream as it became increasingly usable.
In October of 2011, Apple announced that Siri — the mobile personal assistant it acquired in 2010 — would ship on the iPhone 4S. Siri has continued to improve, as have Google Now, Amazon Echo, and a host of other solutions. 2016 will be the year that consumer AI goes mainstream.
For twenty years voice recognition had been “the future.” While Watson and Wolfram Alpha captured the attention of the press, they both had negligible impact on consumers. While the world obsessed over screen size and Angry Birds, Siri kept getting better.
With WatchOS2 and an iPhone 6, Siri finally feels usable. I find myself tapping the phone less, and talking to my wrist more. How did this happen, and what can it tell us about the near future?
Siri is now the main way I handle use cases where I can articulate a clear question (“what is the weather tomorrow?”) or command (“remind me to buy milk tonight”). While I emphasize ‘clear question’, the future will likely see Siri handle increasingly complex natural language questions that deliver an optimal solution to a problem. For example, “Given the tastes of my dinner guests, what meals should I prepare?”
The ecosystem around Apple’s AI implementation is strengthening every day. Developers are exposing more and more functionality to Siri through the Search API. There are, however, bottlenecks imposed by Apple’s privacy policies that prevent it from having access to the rich user-generated datasets it helps create.
And by “kids” I don’t mean “millennials,” I mean the toddlers running about my house. The older one now orders Siri around. She expects to be able to talk to a computer. Remember when kindergarteners suddenly expected all screens to be touch sensitive? Generational shifts like these are great leading indicators of what’s next.
By the end of next year, consumer AI will be everywhere. Operating systems will expose key features of installed applications or replace them altogether.
Facebook M, Operator, WhatsApp, WeChat, Slack, Kik, and every service with a natural language interface is, at its heart, an AI platform. Algorithms can either establish a direct rapport with users or monitor what is being said in a privacy-sensitive way, collecting intelligence and offering assistance.
For example, Apple will integrate Siri within Messages and Mail, Emu-style.
Consumer AI will continue to improve by a factor of two every two years. This sustained, exponential improvement will bring startling results. Incremental projects will overtake attempts at big-bang disruption. Think less “Google Self-Driving Car” and more “Tesla Autopilot.”
Incumbent players will accelerate the acquisition of consumer AI applications to bolster their teams, and to defend their positions. Nobody wants to have done to them what Google did to Yahoo.
Updated on December 16, 2016 based on feedback from Nathan Benaich and Ahmad Nassri, as well as what I learned at the Machine Learning and the Market for Intelligence hosted by the @creativedlab and the University of Toronto. Inspired in part by Shivon Zilis’s excellent work in the space.