If You Can’t Go Through the Door, Break the Wall

Today I had one of the toughest chats I can remember, trying to get through to an engineer on my team.

One of the challenges a new PM faces is building rapport with the engineers. This might be a bigger challenge if those engineers haven’t worked with a PM before, or did, but got burned following bad product decisions or building failed products.

I’m not sure which situation I’ve walked into, but I do need to build trust. I’m not worried, though. I’ve been in that situation many times before; I’ll be OK. I’m also a CS graduate, and can talk the talk if needed. However, with this guy things are a bit different.

He’s quiet, uninterested, avoids communication, and gives off a bad vibe. But he’s not someone you can dismiss. He’s experienced, built a lot of the current product, and commands a lot of respect. In fact, he was offered the tech lead role, and refused.

I need to get through to him. Not only because I want good communication within the team, but because I think he can, and should, take an instrumental role in what we’re building.

And so I took it as this week’s project. At first, I scheduled time with him to pick his brain on one of the products we’re planning to build. As the most experienced person, I wanted to get his opinion. I thought that consulting with him, and involving him during the ideation phase, would get him excited and open him up. I was wrong. He was very brief in his remarks, and didn’t share any independent thought.

I tried to have a follow-up session, but he invited the rest of the team. He’s not the “tap on the shoulder” kind of guy, and through the entire week I tried different means of communication. None worked. Short Slack messages don’t seem like a long-term strategy for developing trust.

Today I decided to make a bolder move. I saw him taking a solitary coffee break, and invited myself to sit next to him.

Again I shared what we’re planning to build. This time, though, I didn’t share my vision. Instead, I asked him to define one, and take the lead on driving it; I would help where needed. He said he had already refused to take any leadership role. Thirty minutes later, after many more failed attempts to engage with him, I took a step back and asked: “what will make you excited?” His answer I didn’t expect: “Nothing. I’m not doing work to be happy. I’m committed, and will complete all my tasks, but I want to do the boring stuff. I want to help my team build faster. I don’t want to be acknowledged, and I don’t want to present and demo stuff.”

How do I deal with that answer…?

I felt my ammunition depleted. I couldn’t find an API to him. And so, with no other options, I turned to the one that’s the hardest, but which can’t fail – the truth.

And here’s the essence of what I then said:

“Fine, don’t take a leadership role; don’t take more responsibility; don’t build the fun stuff; for all it’s worth, don’t even be happy. But we’re on the same team, and we must communicate. Right now we don’t. I can’t tell if it’s personal – I think it’s not – but I can’t find a way for us to work together. I’m very uncomfortable, and can’t see how this situation is sustainable. There’s no option for us but to figure things out.”

That wasn’t fun. I was stressed and emotionally drained after that. Not because I said uneasy things, but because throughout the entire time he didn’t give a single nod. Nor did he react. Not a single muscle in his face moved. I had no clue if what I’d said affected him. Did he listen? Did he care? Was he mad? Annoyed? It felt like flying in the dark, with no instruments to guide me.

Eventually he did respond. But his response had nothing to do with what I’d just said. He simply shared positive progress on a feature he’d been working on. To me, though, what he said sounded more like “gotcha, let me digest what you’ve just said.” I was encouraged.

And now I need to wait. I don’t know if that tactic will work, but at least I’m at peace – I was transparent and honest, and didn’t let awkwardness and discomfort control me.

Product Leader’s First 100 Days Plan

Asking a candidate for a 100-day plan has apparently become the norm. As for me, I’m part of this norm. During the final steps of my recruiting process at Button, I created such a 30/60/90-day plan.

“By failing to prepare, you are preparing to fail.”
Benjamin Franklin

Drafting this plan was one of the better ways I could prepare for the new role. It helped me both set better expectations for myself about what would be expected of me in those first three months, and set a cadence for progress and execution right from day one.

As I’m about to hit the 30-day mark, it’s time to revisit this plan and see where I stand against it. Doing so, I figured it might be a good idea to share my plan. Hopefully someone finds it helpful, and/or shares their own ideas.


Time To Move On

Tomorrow will be my last day at Outbrain. Here’s my goodbye note.

Hi all,

Tomorrow will be my last day at Outbrain.

I’d like to thank you for being such an important part of my life for the last (almost) 5 years.

As there’s rarely a single feature that makes a killer product, but rather a combination of capabilities orchestrated in just the right way to help a user solve a need, the same is true now, when I look to thank the people who’ve shaped my experience at Outbrain. I can’t mention just one or a few people, because it’s the collection of you all, creating a truly unique culture and atmosphere, that makes this company so special.

I’m thankful for the privilege to work with and learn from you!

Keeping in touch with many Outbrain alumni, I know that I can leave Outbrain, but Outbrain will never leave me 🙂

So please don’t hesitate to keep in touch and reach out! I’ll be a click of a button away.

 

Thanks. Yaniv

 

What I’ve Been Working On Lately – Recap

I haven’t written for a while1. And it’s not that nothing has happened. The opposite… so much learning and so many new experiences that I didn’t find the time to log them. No, it’s not a lack of time, but rather not internalizing how important it is to stop, assess, and capture what I’m learning as I go.

But better late than never2. So here’s a list of projects I’ve been working on, in no particular order, followed by a list of the new skills I’ve learned.

Projects

Outbrain News Brief for Alexa

This is a simple Alexa skill that reads summaries of top news stories. As a user, you add this skill to Alexa, and can then ask Alexa “what’s in the news”. Alexa then calls a web service, which I developed. This web service calls Outbrain and asks for the latest news (using the Sphere platform). It then sends the articles it gets from Outbrain to a summarization service (Agolo), and returns the summaries to Alexa, which reads them to the user.
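To make that flow a bit more concrete, here’s a minimal sketch of what such a web service could look like in Flask. The Sphere and Agolo URLs, parameters, and response fields below are placeholders I made up for illustration – the real APIs are different, and the actual service does more (error handling, caching, the exact feed format Alexa expects).

```python
# A minimal sketch of the news-brief pipeline described above.
# The Sphere/Agolo URLs and response fields are placeholders, not the real APIs.
import requests
from flask import Flask, jsonify

app = Flask(__name__)

SPHERE_LATEST_NEWS_URL = "https://example.com/sphere/latest-news"  # hypothetical
AGOLO_SUMMARIZE_URL = "https://example.com/agolo/summarize"        # hypothetical


@app.route("/news-brief")
def news_brief():
    # 1. Ask Outbrain (Sphere) for the latest news articles.
    articles = requests.get(SPHERE_LATEST_NEWS_URL, timeout=5).json()["articles"]

    # 2. Send each article to the summarization service (Agolo).
    summaries = []
    for article in articles[:5]:
        resp = requests.post(AGOLO_SUMMARIZE_URL, json={"url": article["url"]}, timeout=10)
        summaries.append({"title": article["title"], "summary": resp.json()["summary"]})

    # 3. Return the summaries in a shape the Alexa skill can read out.
    return jsonify(stories=summaries)


if __name__ == "__main__":
    app.run(debug=True)
```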

Outbrain skill for Alexa

Similar to the skill above3, but with more functionality. This is actually the initial stage of a conversational experience, where users will be able to interact with Alexa to get personalized news stories. Users will be able to guide Alexa, through conversation, to articles from a site or on a topic they are interested in, or to discover new content based on their interest graph. Here’s a simple sequence diagram illustrating the current user flow:

sequence-diagram.jpg

My Clipboard

alexa-clipboard-icon.png

Now that’s where things become more interesting – working on my own stuff… This is a skill for Alexa that serves as your clipboard. You can say “Alexa, ask my clipboard to remember 212 322 4432” and she’ll remember this phone number for you. Say “Alexa, ask my clipboard what’s in my clipboard” (yeah, redundant, I know…) and she’ll repeat the phone number for you.

Why is it helpful? Imagine that you’re on the phone and can’t take a note, or can’t find a pen to write one down… let Alexa handle it for you. But if you think about a smarter clipboard, one that takes keys and values, you can do much more interesting stuff. For example, ask Alexa to remember that you put your passport in the top drawer. Later on, you can ask her where you put the passport. But that’s longer-term functionality… I first need to finish the current iteration and make it public (it isn’t at the time of writing this…).
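I’m not sharing the skill’s actual code here, but to give a sense of how small such a skill can be, here’s a rough sketch using Flask-Ask (more on Flask-Ask below). The intent and slot names are made up for illustration; the real skill also has to deal with sessions and per-user persistence.

```python
# A rough sketch of a clipboard-style Alexa skill using Flask-Ask.
# Intent and slot names (RememberIntent, RecallIntent, "item") are made up.
from flask import Flask
from flask_ask import Ask, statement

app = Flask(__name__)
ask = Ask(app, "/")

clipboard = {}  # in-memory store; a real skill would persist this per user


@ask.intent("RememberIntent")
def remember(item):
    # "Alexa, ask my clipboard to remember 212 322 4432"
    clipboard["last"] = item
    return statement("OK, I'll remember {}".format(item))


@ask.intent("RecallIntent")
def recall():
    # "Alexa, ask my clipboard what's in my clipboard"
    return statement("You asked me to remember {}".format(clipboard.get("last", "nothing yet")))


if __name__ == "__main__":
    app.run(debug=True)
```

The keys-and-values version (“the passport is in the top drawer”) would mostly mean adding a key slot and storing items under that key instead of a single “last” entry.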

Baby Weatr

Artboard_1v2.png

Baby Weatr is a Facebook Messenger app4 that helps parents decide how to dress their kids appropriately for the weather. Well, it’s designed around my lack of any skill at translating weather into baby wear. So, to make sure I’m not endangering our daughter, I decided to build this decision-support app.

I’m working on it together with a friend, but this was an opportunity to tie together a lot of the things that I like, and always wanted to use more, such as coding and design5. Initially we tried to outsource the design work, but working with cheap freelancers produced deliverables at the quality we paid for, meaning bad. On the other hand, hiring a capable designer is expensive. So I decided to seize the opportunity to connect with the right side of my brain, and design the first version of the app myself.

Baby Weatr is live on Messenger now, so if you need help dressing up your baby – I would love to get your feedback…

Try Baby Weatr

Dlog

While working on the projects above, I did quite a bit of coding. What’s more, this time I coded almost professionally (some of what I built is going to be used by my company…).

I found that I need to log what I’m doing, so I can backtrack if needed, and won’t make the same mistakes twice. I found that it also accelerates my learning (similar to how writing does…). Git commits or inline comments weren’t enough, as I wanted to capture not only the outcome of my thinking, research, trial and error, and refactoring, but also my deliberations, and to place breadcrumbs as I go. I wanted to be able to read back and understand why I made certain decisions. For example, why I selected one data structure and not another, how to start a Flask project, and how to run a Flask app and make it reload every time I make a change.

And so I started to maintain a file called a development log, or dlog. I keep it open as part of my workspace and include it in my git repository. Here’s an example of what it looks like (the dlog is in the bottom-right quadrant):
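To give a flavor of it, a typical entry might look something like this (a made-up example in the spirit of what I capture, not copied from my actual dlog):

```
2016-12-28
- Chose a dict over a list for the store: I look items up by key, and
  ordering doesn't matter.
- Starting a new Flask project: pip install flask, create app.py, then run
  FLASK_APP=app.py FLASK_DEBUG=1 flask run to get auto-reload on every change.
- TODO: figure out how to persist the store between restarts.
```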

I thought this might be something other developers would find helpful, so I put it on a separate blog (here). I’m contemplating the idea of opening this blog to others, with the assumption that if many developers log their process, it will serve as a new form of knowledge repository, a Stack Overflow extension, or a companion to README documentation.

Things I’ve learned

Chatbots / Messenger Apps

Well, that’s not new for me… my team has been dedicated to messaging apps for a while now. I think I mentioned before that we’re responsible for the CNN app on Messenger and Kik, as well as for the apps of other notable publishers. What I did learn is how to view these types of apps as the best way to develop an MVP, and how you can build a full experience with building blocks and a minimal amount of code or back-end services.

I used Chatfuel6 as the content management system for the Baby Weatr app, and love the way I can control the behavior of the app and build it to match the way I think about flows. Here’s how the Baby Weatr app looks within Chatfuel: chatfuel-baby-weatr.png

Assistant devices

Assistant devices are the conversational version of messenger apps. Here, a user interacts with a device by voice rather than by text. I’ve been working with Alexa on the skills I mentioned above. I also experimented with Google Home and their api.ai platform.

I think that these experiences are the real revolution in AI and conversational design, and that messaging apps, or chatbots, are just a stop along the way. I suspect that FB is going to kill their (less than a) year-old platform, and bet on live video, VR, and maybe voice recognition. Right now the messenger apps are like a ghost town. There’s much more to say about that, and about what messenger apps are good for (hint: MVP). I’ll keep that for another post.

Python

Python isn’t new to me. I use it occasionally to write scripts that streamline my workflows or automate tedious manual work. (Automate the Boring Stuff with Python was the book that got me started with Python. Highly recommended.)

But this is the first time I’ve used Python for real products and services. Using it more intensively, I’ve learned how friendly the language is, and how well it fits the way I think about code. I wrote so much that even Google took note, and invited me to the Google coding challenge7, mistaking me for a real developer :-). google-code-challange.jpg

Flask

That’s the backbone of almost every one of the projects I listed above. Flask, and its Alexa extension, Flask-Ask, are super easy and intuitive packages that help create web services. I created a template (TODO: push this template to GitHub) and use it as a starting point for new projects.
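The template itself isn’t on GitHub yet, but its general shape is roughly this – a sketch of a minimal Flask-Ask starting point, not the actual file:

```python
# Roughly the shape of a Flask-Ask starting point (a sketch, not the actual template).
from flask import Flask
from flask_ask import Ask, question, statement

app = Flask(__name__)
ask = Ask(app, "/")


@ask.launch
def launch():
    # Spoken when the user opens the skill without a specific request.
    return question("Hi! What would you like to do?")


@ask.intent("HelloIntent")
def hello():
    # Replace with the skill's real intents.
    return statement("Hello from Flask-Ask.")


@ask.session_ended
def session_ended():
    return "{}", 200


if __name__ == "__main__":
    app.run(debug=True)
```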

Design

Now, that’s where my passion is these days. I’ve just finished a 12-hour Illustrator course on Udemy, and I’m in the middle of an… Illustrator 4 Fashion class. All I think about are shapes and colors, and how I can make them in Adobe Illustrator; my hands are glued to the new Intuos Pro I’ve just bought.

In a way, I’m where I wanted to be when I did my bachelor’s degree – code and design (I graduated as a software engineer, with a focus on machine learning and… graphic design).

But on my journey to Illustrator I actually made two stops: Inkscape and Sketch. I started with Inkscape, which is great. It’s easy to learn and very powerful. What I like most about Inkscape is the control over the creation and modification of paths, which is way easier and more intuitive than in Sketch and Illustrator. I did most of the clothing items for Baby Weatr using it, and posted samples of these designs in a previous post.

But Inkscape is lacking in layout and layer management. I also missed smart guides, which make interface design much more controllable. And so I started to learn Sketch.

I love Sketch’s workflow, as well as the way it lets me organize design assets alongside my artboards. But it’s not a replacement for Inkscape when it comes to actual illustrations. What I ended up doing is creating the clothing items’ illustrations in Inkscape and importing them into Sketch, where I did the layout and created the sets of outfits.

Here’s how the Baby Weatr project looks in Sketch: sketch-outfits-page.png sketch-cloth-items.png

And then, as I was drawn deeper and deeper into design, I started to learn more about Adobe Illustrator. I had tried it while using Inkscape and Sketch, but it seemed too complex and inaccessible to me. But the more complex it seemed, the more attracted to it I became (no wonder I use Emacs…). When I finished all the clothing sets I needed for the beta launch of Baby Weatr, I decided to get serious and learn Illustrator. After all, it is the tool for designers…

And as I mentioned, 150 episodes later – spanning 15 hours and 3 courses – I’m at a point where I feel comfortable with the tool and am starting to do art and design work in it.

Phew, there was a lot of catching up I needed to do… but it feels good to look at that list and appreciate all the things that I had the chance to learn and experiment with.

Footnotes:

1

And now, just to make sure this post is going to get published, I used one of my hacks, and put it on scheduled publishing…

2

And no, there’s no new year’s resolution involved in this writing. I don’t like this practice, and don’t set those resolutions…

3

This skill is still in development, so it’s not public and can’t be added to Alexa yet.

4

aka chatbot, but I denounce the term, because it’s lame…

5

I graduated as a software engineer, with a focus on machine learning and… graphic design.

6

They were actually a fierce competitor when we tried to get the CNN project 🙂

7

I completed several stages, but didn’t go all the way, because I had other things to work on, and I’m not going to make a career switch…

Your App Is Buried In A Folder – Make Its Icon Stand Out

Meetup has finally updated its mobile app. More than that, it went through a complete re-branding, and as part of it also redesigned its mobile app’s icon.

From the look of the new icon, it seems that Meetup’s designers assumed their app sits front and center on their users’ devices. I doubt that’s the case.

 It’s increasingly difficult for smaller publishers/brands to break through — even with downloaded apps — because of folders (being buried) .. — marketingland.com

I’m one of those users… While I use the Meetup app quite often, to stay in touch and communicate with members of the groups I lead, it’s not one of the few apps I spend most of my time on. Therefore Meetup, like 98% of my apps, lives in a folder.

As a foldered app, it should have an icon that’s visually distinguishable and stands out with every pixel; otherwise users will ignore the app and won’t use it. Meetup’s new icon is anything but. On the contrary – it blends with the rest of the icons and lacks identity.

Take a look at Meetup’s icon before and after:

meetup-new-icon.png

Figure 1: Left – before; right – after. In both images, Meetup is in the top-right folder, at the bottom-right icon position.

The previous icon, while not optimized for mobile – having to squeeze the name into the small icon – had some color contrast to it, which made it recognizable.

Your App Is Not Special

Don’t assume users care about your app; they don’t. After downloading it, they are likely to either delete it or throw it into a folder. The least you can do is plan for the latter, and design an icon that’s unique and recognizable at any size.

Take another look at the screenshots above – which icons do a better job of grabbing your attention, even when placed within a folder1?

Footnotes:

1

My pick would be Workflow’s icon (same folder as Meetup, bottom-left corner), as well as Spotify (right image, top-left folder, top-left icon) and Overcast (right image, top-left folder, mid-left icon).

An Inconsistent User Experience in iOS

When it comes to user experience, I’m a big fan of consistent design, which gives users confidence that their actions will lead to an expected outcome. When users know what to expect, they are open to experimentation; they are not afraid to explore a wider set of features and try out new capabilities.

When there’s no consistency – when the same function gets different names or labels, or shows up in different places – users get confused. And when users get confused, they’re reluctant to try anything that’s not within their immediate need. Here’s an example of such confusion, which I just experienced on my iPhone when trying to share an image with a friend.

That’s the flow I went through:

  • Took a screenshot on my iPhone 
  • Went to the iOS photos app
  • Selected the screenshot I’d just taken
  • Clicked the share icon
  • Selected to share via Messages
  • Selected the friend I wanted to share the screenshot with
  • Clicked send

Or did I…? When I clicked what I thought was send, the Messages screen closed, leaving me wondering whether the image was actually sent. I repeated the flow, and just before clicking the “send” button1, paused to read its label. Hmm… it says “cancel”. That’s weird. I was pretty sure it should say “send”. But what made me think that that’s where the “send” button is? Was there another app that primed me with this expectation?

There is, of course. It’s called Mail.

In Mail, the send button dominates the top-right corner of the screen. Now, since I send many emails every day, way more than I share photos, my brain expects the “send” button, in whatever app I’m in, to show up at the top right.

ios-inconsistency-ux.png

Figure 1: On the left is the Messages app. On the right, the Mail app. Note the different buttons in the top-right corner of each app.

I love those moments of self-awareness, which allow me to test some of my own assumptions…

Footnotes:

1

It’s a little hard for me to call it a button, because nothing makes it stand out from its background, the way you would expect of a button. Is it possible that my brain is still wired for the pseudo-physical, skeuomorphic design…?

Self User Testing

OK, so I’m retracting my agreement that descriptions are useless. I just had an experience that proved it wrong.

Well, some context will be helpful… let me step back and explain. Yesterday we had a heated discussion in the team about the usefulness of showing a description of a post inside a recommendation tile in our chatbot. Take a look at the screenshot below. This is how we currently display recommendations in our Facebook Messenger bot:

fb-chatbot-ctas.PNG

Each recommendation comes with a set of metadata: thumbnail, title, source, and description. The bot.outbrain.com label is an ugly appendage forced by Facebook. Then there are the actions you can take on a recommendation. Clicking on the thumbnail will open the article in a webview. Summary will return an auto-generated summary1, Stash will save it for later, and #{topic} will return more recommendations on the same topic.
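For reference, this is roughly how one of these recommendation tiles maps onto Messenger’s generic template. The titles, URLs, and postback payloads below are made up for illustration – the bot’s real payload format isn’t shown in this post:

```python
# Roughly how one recommendation tile maps onto Messenger's generic template.
# The title, URLs, and postback payload strings are made up for illustration.
recommendation_tile = {
    "attachment": {
        "type": "template",
        "payload": {
            "template_type": "generic",
            "elements": [{
                "title": "Elon Musk's plan to colonize Mars",        # article title
                "subtitle": "Example Source\nA short description...",  # source + description
                "image_url": "https://example.com/thumbnail.jpg",    # thumbnail
                "default_action": {                                   # tapping opens the article in a webview
                    "type": "web_url",
                    "url": "https://example.com/article"
                },
                "buttons": [                                          # up to three actions per tile
                    {"type": "postback", "title": "Summary", "payload": "SUMMARY:article-id"},
                    {"type": "postback", "title": "Stash", "payload": "STASH:article-id"},
                    {"type": "postback", "title": "#space", "payload": "TOPIC:space"},
                ],
            }],
        },
    },
}
```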

You’ll notice that the description in this example (taken from the article page) isn’t great. It’s trimmed, and does little to explain what this story is about. Essentially, it doesn’t help me decide whether to read or pass on this recommendation.

One of the ideas we came up with is replacing the description with the reason the user sees a specific recommendation. We call this feature “Amplify the WHY”. So in the example in the image above, I’m probably seeing this story because I read a lot about science and astronomy. The “WHY” in this case might read something like “because you’re interested in astronomy”.

It would have been nice to show both the description and the “WHY”, but we have limited real estate to work with, and need to choose one of them.

My team was adamant that we should drop the description and go with the “WHY”. At first, I was reluctant to agree. “I want to see data first”, I said. “Let’s run an A/B test”. “Well, we don’t have users yet, so A/B testing isn’t relevant at this point. Also, it’s so clear that ‘amplifying the WHY’ is better than showing a crappy description that we should take this as the baseline” was the reply I got. How can you argue with such compelling reasoning…

Now, circling back to my opening, I’m taking my agreement back.

I woke up at 7am today and wanted to read about the results of last night’s debate. I didn’t know where I could find this information, quickly and succinctly2. I thought about the CNN chatbot, but CNN’s top stories are posted only at about 9am. Then I figured, let’s see if I can find something relevant in our bot.

I typed “hi”, and (to my surprise) the first story I got was right on point –

fb-chatbot-election-debate.PNG

Then I browsed a little more, and suddenly noticed that in any recommendation with a relevant title, I skim the description for more context. I also realized that I don’t look for completeness or quality; just a few more words that give a better idea of what the article is about.

“WHY” I get a recommendation, and why it’s important to me, wasn’t relevant in the context I was in – checking the news, the objective news, not just the news that’s in my “bubble”.

Summary wasn’t relevant in that use case either, because much like clicking to read the story, it means “choosing” and focusing on one article, whereas I was still at the decision-making stage.

So, what I’ve learned from observing myself (and in that rare instance, I acted as a user rather than a stakeholder) is that the description does have value, and in certain use cases, such as browsing the news, I need objective hooks. The description, in that case, and not a personalized reason, was more relevant.

Definitely not a representative experience, but one that makes me rethink what the baseline should be. And whatever the baseline is, we should put it to the test.

Footnotes:

1

Works pretty neatly. Here’s the summary for this article in the picture: “On Tuesday, thousands of people stampeded into a lecture hall in Guadalajara, Mexico, to hear SpaceX CEO Elon Musk talk about how he wants to colonize Mars. Another question is how — and if — Musk plans to prevent Earth microbes from contaminating Mars, and Mars microbes (if there are any) from contaminating Earth.”

2

I don’t go to sites to look for news anymore, and rarely google for news. And since the extinction of Zite, I now realize, I have no idea where I get my news from…

Google Allo – First Impression

Yesterday I installed the new Google Allo and gave it a first try. My team at Outbrain is responsible for building a chatbot CMS for publishers, so I was interested to learn about some of the decisions made in Allo, and compare them with what we’ve learned over the last 6 months powering the CNN bots on Facebook Messenger and Kik.

User on-boarding

I downloaded the app, installed it, but then deleted it in the middle of the on-boarding. Why? Because Google is being overly transparent. Why do they make such a point of saying they are going to send my contact list to their cloud every now and then? There must be some evil reason for that…

Allo-onboarding-1.png

So, I deleted the app. But then I thought to myself, “wait, you’re using Google Contacts, and your contacts are already syncing with Google. Not periodically, but all the time, in real time…” I felt stupid, downloaded the app again, and completed the on-boarding. And I won’t say I felt better when the first few prompts from Allo kept pushing on that sharing thing, as if trying to tell me that I’d be better off not using it if I want to keep anything private.

Allo-onboarding-2.PNG

To sum things up, the on-boarding experience could have done more to instill trust and make me more comfortable. Right now I’m not, and although he’s a bit more of a privacy snob than I am, Snowden has already made a point about the lack of privacy in Allo.

Content experience

  • Typed “top stories” – I got relatively fresh stories, but definitely not important ones.
  • They show the publish time. Seeing that a story was published 37 minutes ago gives confidence that they deliver news as it happens.
  • The stories carousel is clean and simple, but I would have liked to be able to take action on a specific story. This is possible in Facebook Messenger using the ‘Structured Message’ template. Article recommendations in Allo feel temporary, since you can’t do much to engage with them other than read them when you see them. Adding an option to see a summary of an article, save it for later, or get more similar stories might give users a better sense of control over the experience and the stories they are seeing.
  • Google seems to think of Allo as a new interface for search, which makes sense for Google, but makes Allo feel like a browser. When searching for something, the first quick reply is “Google results”, which, once tapped, opens the browser and searches for your input. I didn’t like that it takes me out of the app.
  • The content in Allo doesn’t feel native. Rather, it feels like a patch, a cut-and-paste from the browser. Again, it makes me feel that Allo is just another browser.

Chat-flow and experience

  • There are no dead ends. Even when chatting with friends, you always have quick replies available. That’s great.
  • There are ‘like’ and ‘dislike’ emojis in the last two positions of every set of quick replies. It didn’t make sense to me. As a user, I don’t know what they mean, hence I probably won’t use them.

AI

  • That’s the part that surprised me the most. Allo tries to be smart. It tries as much as it can to be non-scripted. Say “hi” and every time it will answer with something different. The first time I typed “hi”, I got the entry-point experience, namely the options I have to interact with the bot. Later, when I wanted to get to the same entry point, I typed “hi” again. This time, though, Allo tried to get into a conversation with me. After a few more greeting inputs that got me nowhere, I gave up and typed what I was looking for.
  • At this early stage, when users aren’t educated enough on conversational design, and are accustomed to more deterministic experiences, trying to be smart is wrong. It’s like the early days of the iPhone – the skeuomorphic design helped users get accustomed to using it, through icons that imitated physical objects. Only once they were educated, more than 8 years later, was flat design introduced.

To sum things up, my overall impression is “eh”. Yeah, it’s cool to play with Allo and see how well it handles natural language, but it’s no different than Google search. In fact, it feels too much like Google search, which is a bit outdated. But then again, I’m writing this post in Emacs…

Hostile Lead Generation

In the last few days I’ve been getting daily emails from TWC, promoting their “Time Warner Cable Business Class” service. I don’t know anything about this service, and since I don’t run a business, it’s irrelevant to me.

Until today I simply deleted those emails, but today I got annoyed, and made an effort to show it by unsubscribing from their mailing list. However, the unsubscribe flow made me think that TWC isn’t really deterred by requests to unsubscribe. In fact, it seems to use them as another user acquisition channel.
And I think I’ve cracked the protocol of this funnel:

Marketing emails

Send daily emails to users whose email addresses we buy.

Keep sending those emails until the user responds, either by clicking the unsubscribe link or by selecting Gmail’s “report spam & unsubscribe” button.

Clean user information through an unsubscribe form

When a user clicks the unsubscribe link in the email, we have a precious opportunity to make sure the information we have about this user is correct.


When a user submits the unsubscribe form, we should update our database with the new information.

User is redirected to the TWC homepage

After we get a successful response from our servers, we redirect the user to the TWC homepage.


We assume (or hope) that when the user submits the form, she moves focus to another tab rather than closing the one she submitted the form in. If this assumption holds, then the user will have the TWC homepage waiting for her, and she’ll get to it in the near future.

User visits the TWC homepage

At some point, as we assumed, the user zaps through her open tabs and opens the one that displays the TWC homepage. Great, we have a new lead! The user is visiting our site, meaning she’s interested in our service.


Hurry up and drop a cookie on her, and tie whatever information we can get to that cookie. Wait, we have her full name and email address!

Retarget potential leads

Let’s make sure we slice the bread while it’s still fresh, and find that user wherever she browses. This way we can nudge her just a little more, and try to get her to come back to our site and take another step toward conversion.


And wait, we have her email! That’s gold…

Well, no hard feelings toward TWC. It’s just an amusing example of the absurdity of how user acquisition works.