What I’ve Been Working On Lately – Recap

I didn’t write for a while1. And it’s not that nothing happened. The opposite… so much learning, so many new experiences, that I didn’t find the time to log them. No, it’s not a lack of time, but rather not internalizing how important it is to stop, assess, and capture what I’m learning as I go.

But better late than never2. So here’s a list of projects I’ve been working on, in no particular order, followed by a list of the new skills I’ve learned.

Projects

Outbrain News Brief for Alexa

This is a simple skill for Alexa that reads summaries of top news stories. As a user, you add this skill to Alexa and can then ask “what’s in the news”. Alexa then calls a web service, which I developed. This web service calls Outbrain and asks for the latest news (using the Sphere platform). It then sends the articles it gets from Outbrain to a summarization service (Agolo), and returns the summaries to Alexa, which reads them to the user.
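Here’s a minimal sketch of how such a skill might be wired together with Flask-Ask. The Sphere and Agolo endpoints and response fields below are placeholders I made up for illustration, not the real APIs:

```python
# A sketch of the news-brief flow. The service URLs and JSON fields
# are hypothetical stand-ins, not the actual Outbrain/Agolo APIs.
import requests
from flask import Flask
from flask_ask import Ask, statement

app = Flask(__name__)
ask = Ask(app, '/alexa')

SPHERE_URL = 'https://example.com/sphere/latest-news'  # placeholder
AGOLO_URL = 'https://example.com/agolo/summarize'      # placeholder

@ask.intent('GetNewsIntent')
def get_news():
    # Ask Outbrain (Sphere) for the latest news stories.
    articles = requests.get(SPHERE_URL).json()['articles']
    # Send each article to the summarization service.
    summaries = [
        requests.post(AGOLO_URL, json={'url': a['url']}).json()['summary']
        for a in articles[:3]
    ]
    # Return the summaries as speech for Alexa to read out.
    return statement(' '.join(summaries))

if __name__ == '__main__':
    app.run(debug=True)
```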

Outbrain skill for Alexa

Similar to the skill above3, but with more functionality. This is actually the initial stage of a conversational experience, where users will be able to interact with Alexa to get personalized news stories. Users will be able to guide Alexa, through conversation, to articles from a site or on a topic they are interested in, or to discover new content based on their interest graph. Here’s a simple sequence diagram illustrating the current user flow:

sequence-diagram.jpg

My Clipboard

alexa-clipboard-icon.png

Now that’s where things become more interesting – working on my own stuff… This is a skill for Alexa that serves as your clipboard. You can say “Alexa, ask my clipboard to remember 212 322 4432” and she’ll remember the phone number for you. Say “Alexa, ask my clipboard what’s in my clipboard” (yeah, redundant, I know…) and she’ll repeat the phone number back to you.

Why is it helpful? Imagine that you’re on the phone and can’t take a note, or can’t find a pen to write one down… let Alexa handle it for you. And if you think about a smarter clipboard, one that takes keys and values, you can do much more interesting stuff. For example, ask Alexa to remember that you put your passport in the top drawer. Later on, you can ask her where you put the passport. But that’s longer-term functionality… I first need to finish the current iteration and make the skill public (it isn’t at the time of writing).
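To give a flavor of the key-value idea, here’s a sketch of what the intent handlers might look like with Flask-Ask. The intent and slot names are hypothetical, and a real skill would persist values per user rather than in memory:

```python
# A sketch of the key-value clipboard. Intent and slot names are made up;
# a real skill would persist values per user, not in a process-local dict.
from flask import Flask
from flask_ask import Ask, statement

app = Flask(__name__)
ask = Ask(app, '/clipboard')

clipboard = {}  # in-memory store, for illustration only

@ask.intent('RememberIntent')
def remember(key, value):
    # "Alexa, ask my clipboard to remember that the passport is in the top drawer"
    clipboard[key] = value
    return statement("Got it. I'll remember that {} is {}.".format(key, value))

@ask.intent('RecallIntent')
def recall(key):
    # "Alexa, ask my clipboard where the passport is"
    value = clipboard.get(key)
    if value is None:
        return statement("I don't have anything stored for {}.".format(key))
    return statement("{} is {}.".format(key, value))
```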

Baby Weatr

Artboard_1v2.png

Baby Weatr is a Facebook Messenger app4 that helps parents decide how to dress their kids appropriately for the weather. Well, it’s designed around my complete lack of skill at translating weather into baby wear. So to make sure I’m not endangering our daughter, I decided to build this decision-support app.

I’m working on it together with a friend, and it was an opportunity to tie together a lot of the things that I like, and always wanted to use more, such as coding and design5. Initially we tried to outsource the design work, but working with cheap freelancers produced deliverables at the quality we paid for, meaning bad. On the other hand, hiring a capable designer is expensive. So I decided to seize the opportunity to connect with the right side of my brain, and design the first version of the app myself.

Baby Weatr is live on Messenger now, so if you need help dressing your baby – I would love to get your feedback…

Try Baby Weatr

Dlog

While working on the projects above, I did quite a bit of coding. What’s more, this time I coded almost professionally (some of what I built is going to be used by my company…).

I found that I need to log what I’m doing, so I can backtrack if needed and won’t make the same mistakes twice. I found that it also accelerates my learning (similar to how writing does…). Git commits and inline comments weren’t enough, as I wanted to capture not only the outcome of my thinking, research, trial and error, and refactoring. Rather, I wanted to capture my deliberations and place breadcrumbs as I go. I wanted to be able to read back and understand why I made certain decisions. For example, why I selected one data structure and not another, how I start a Flask project, and how I run a Flask app and make it reload every time I make a change.

And so I started to maintain a file called a development log, or dlog. I keep it open as part of my workspace and include it in my git repository. Here’s an example of what it looks like (the dlog is in the bottom-right quadrant):
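For flavor, here’s a made-up entry of the kind I keep (the actual format is free-form):

```text
2017-01-08
- Chose a dict keyed by user_id over a list of tuples: lookups are O(1),
  and I expect far more reads than writes.
- New Flask project: virtualenv, pip install flask, a single app.py to start.
- Run with app.run(debug=True) so the dev server reloads on every change.
```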

I thought this might be something other developers find helpful, and put it on a separate blog (here). I’m toying with the idea of opening this blog to others, with the assumption that if many developers log their process, it will serve as a new form of knowledge repository, a Stack Overflow extension, or a companion to readme documentation.

Things I’ve learned

Chatbots / Messenger Apps

Well, that’s not new for me… my team has been dedicated to messaging apps for a while now. I think I mentioned before that we’re responsible for the CNN app on Messenger and Kik, as well as for the apps of other notable publishers. What I did learn is how to view these types of apps as the best way to develop an MVP, and how you can build a full experience out of building blocks, with a minimal amount of code or back-end services.

I used Chatfuel6 as the content management system for the Baby Weatr app, and I love the way I can control the behavior of the app and build it to match the way I think about flows. Here’s how the Baby Weatr app looks within Chatfuel:
chatfuel-baby-weatr.png

Assistant devices

Assistant devices are the conversational version of messenger apps. Here, a user interacts with a device by voice, rather than by text. I’ve been working with Alexa on the skills I mentioned above. I also experimented with Google Home and its api.ai platform.

I think that these experiences are the real revolution in AI and conversational design, and messaging apps, or chatbots, are just a stop on the way. I suspect that FB is going to kill its (less than) year-old platform, and bet on live video, VR, and maybe voice recognition. Right now the messenger apps are like a ghost town. There’s much more to say about that, and about what messenger apps are good for (hint: MVP), but I’ll keep that for another post.

Python

Python isn’t new to me. I use it occasionally to write scripts that streamline my workflows or automate tedious manual work. (Automate the Boring Stuff with Python was the book that got me started with Python. Highly recommended.)

But this is the first time I’ve used Python for real products and services. Using it more intensively, I’ve learned how friendly the language is, and how well it fits the way I think about code. I wrote so much that even Google took note, and invited me to the Google coding challenge7, mistaking me for a real developer :-)
google-code-challange.jpg

Flask

That’s the backbone of almost every one of the projects I listed above. Flask, and its Alexa extension, Flask-Ask, are super easy and intuitive packages that help in creating web services. I created a template (TODO: push this template to GitHub) and I use it as a starting point for new projects.
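The template boils down to a skeleton along these lines (a sketch from memory, not the actual, not-yet-published template):

```python
# A minimal Flask skeleton in the spirit of my starting template.
# This is a sketch; the actual template isn't on GitHub yet.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/health')
def health():
    # Trivial endpoint to verify the service is up.
    return jsonify(status='ok')

if __name__ == '__main__':
    # debug=True turns on the auto-reloader, so the app restarts on
    # every code change -- one of the things I kept looking up.
    app.run(debug=True)
```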

Design

Now, that’s where my passion is these days. I’ve just finished a 12-hour Illustrator course on Udemy, and I’m in the middle of an… Illustrator 4 Fashion class. All I think about are shapes and colors, and how I can make them in Adobe Illustrator; my hands are glued to the new Intuos Pro I’ve just bought.

In a way, I’m where I wanted to be when I did my bachelor’s degree – code and design (I graduated as a software engineer, with a focus on machine learning and… graphic design).

But on my journey to Illustrator I actually made two stops, at Inkscape and Sketch. I started with Inkscape, which is great. It’s easy to learn and very powerful. What I like most about Inkscape is the control over the creation and modification of paths, which is way easier and more intuitive than in Sketch or Illustrator. I did most of the clothing items for Baby Weatr using it, and posted samples of these designs in a previous post.

But Inkscape is lacking in layout and layer management. I also missed smart guides, which make interface design much more controllable. And so I started to learn Sketch.

I love Sketch’s workflow, as well as the way it lets me organize design assets alongside my artboards. But it’s not a replacement for Inkscape when it comes to actual illustration. What I ended up doing is creating the clothing items’ illustrations in Inkscape and importing them into Sketch, where I did the layout and created the sets of outfits.

Here’s how the Baby Weatr project looks in Sketch:
sketch-outfits-page.png
sketch-cloth-items.png

And then, as I was drawn deeper and deeper into design, I started to learn more about Adobe Illustrator. I had tried it while using Inkscape and Sketch, but it seemed too complex and inaccessible to me. Yet the more complex it seemed, the more attracted to it I became (no wonder I use Emacs…). When I finished all the clothing sets I needed for the beta launch of Baby Weatr, I decided to get serious and learn Illustrator. After all, it is the tool for designers…

And as I mentioned, 150 episodes later – 15 hours, spanning 3 courses – I’m at a point where I feel comfortable with the tool, and am starting to do art and design work in it.

Phooo, there was a lot of catching up to do… but it feels good to look at that list and appreciate all the things I’ve had the chance to learn and experiment with.

Footnotes:

1

And now, just to make sure this post actually gets published, I used one of my hacks and put it on scheduled publishing…

2

And no, there’s no new year’s resolution involved in this writing. I don’t like the practice, and don’t set such resolutions…

3

This skill is still in development, so it’s not public and can’t be added to Alexa yet.

4

aka chatbot, but I denounce the term, because it’s lame…

5

I graduated as a software engineer, with a focus on machine learning and… graphic design.

6

They were actually a fierce competitor when we tried to get the CNN project 🙂

7

I completed several stages, but didn’t go all the way, because I had other things to work on, and I’m not going to make a career switch…

Your App Is Buried In A Folder – Make Its Icon Stand Out

Meetup has finally updated its mobile app. More than that, it went through a complete re-branding, and as part of it also redesigned the icon of its mobile app.

From the look of the new icon, it seems that Meetup’s designers assumed their app sits front and center on their users’ devices. I doubt that’s the case.

It’s increasingly difficult for smaller publishers/brands to break through — even with downloaded apps — because of folders (being buried). — marketingland.com

I’m one of those users… while I use the Meetup app quite often, to stay in touch and communicate with members of the groups I lead, it’s not one of the few apps I spend most of my time on. Therefore Meetup, like 98% of my apps, lives in a folder.

As a foldered app, it should have an icon that’s visually distinguishable and stands out with every pixel; otherwise users will ignore the app and won’t use it. Meetup’s new icon is anything but. On the contrary – it blends in with the rest of the icons and lacks identity.

Take a look at Meetup’s icon before and after:

meetup-new-icon.png

Figure 1: Left – before, Right – after. In both images, it’s in the top-right folder, the bottom-right icon

The previous icon, while not optimized for mobile – having to squeeze the name into a small icon – had some color contrast to it, which made it recognizable.

Your App Is Not Special

Don’t assume users care about your app; they don’t. After downloading it, they are likely to either delete it or throw it into a folder. The least you can do is plan for the latter, and design an icon that’s unique and recognizable at any size.

Take a look again at the screenshots above – which icons do a better job of grabbing your attention, even when placed within a folder1?

Footnotes:

1

My pick would be Workflow’s icon (same folder as Meetup, bottom-left corner), as well as Spotify (right image, top-left folder, top-left icon) and Overcast (right image, top-left folder, mid-left icon).

An Inconsistent User Experience in iOS

When it comes to user experience, I’m a big fan of consistent design, which gives users confidence that their actions will lead to an expected outcome. When users know what to expect, they are open to experimentation; they are not afraid to explore a wider set of features and try out new capabilities.

When there’s no consistency – when the same function gets different names or labels, or shows up in different places – users get confused. And when users get confused, they’re reluctant to try anything that’s not within their immediate need. Here’s an example of such confusion, which I’ve just experienced on my iPhone while trying to share an image with a friend.

That’s the flow I went through:

  • Took a screenshot on my iPhone 
  • Went to the iOS photos app
  • Selected the screenshot I’ve just taken
  • Clicked the share icon
  • Selected to share via Messages
  • Selected the friend I wanted to share the screenshot with
  • Clicked send

Or did I…? When I clicked what I thought was send, the Messages screen closed, leaving me wondering if the image was actually sent. I repeated the flow, and just before clicking the “send” button1, paused to read its label. Hmm… it says “cancel”. That’s weird. I was pretty sure it should say “send”. But what made me think that that’s where the “send” button is? Was there another app that primed me with this expectation?

There is, of course. It’s called Mail.

In Mail, the send button dominates the top-right corner of the screen. Now, since I send too many emails every day, way more often than I share photos, my brain expects the “send” button, in whatever app I’m in, to show up at the top-right.

ios-inconsistency-ux.png

Figure 1: On the left, the Messages app; on the right, the Mail app. Note the different buttons in the top-right corner of each app.

I love those moments of self-awareness, which allow me to test some of my own assumptions…

Footnotes:

1

It’s a little hard for me to call it a button, because nothing makes it stand out from its background, as you would expect of a button. Is it possible that my brain is still wired for the pseudo-physical, skeuomorphic design…?

Self User Testing

OK, so I’m retracting my agreement that descriptions are useless. I just had an experience that proved it wrong.

Well, some context will be helpful… let me step back and explain. Yesterday we had a heated discussion in the team about the usefulness of showing a post’s description inside a recommendation tile in our chatbot. Take a look at the screenshot below. This is how we currently display recommendations in our Facebook Messenger bot:

fb-chatbot-ctas.PNG

Each recommendation comes with a set of metadata: thumbnail, title, source, and description. The bot.outbrain.com is an ugly appendage forced by Facebook. Then there are the actions you can take on a recommendation. Clicking the thumbnail will open the article in a webview. Summary will return an auto-generated summary1, Stash will save it for later, and #{topic} will return more recommendations on the same topic.

You’ll notice that the description in this example (taken from the article page) isn’t great. It’s trimmed, and does little to explain what the story is about. Essentially, it doesn’t help me decide whether to read or pass on this recommendation.

One of the ideas we came up with is replacing the description with the reason the user sees a specific recommendation. We call this feature “Amplify the WHY”. So in the example in the image above, I’m probably seeing this story because I read a lot about science and astronomy. The “WHY” in this case might read something like “because you’re interested in astronomy”.

It would have been nice to show both the description and the “WHY”, but we have limited real estate to work with, and need to choose one of them.

My team was adamant that we should drop the description and go with the “WHY”. At first, I was reluctant to agree. “I want to see data first”, I said. “Let’s run an A/B test”. “Well, we don’t have users yet, so A/B testing isn’t relevant at this point. Also, it’s so clear that ‘amplifying the WHY’ is better than showing a crappy description that we should take it as the baseline” was the reply I got. How can you argue with such compelling reasoning…

Now, circling back to my opening, I’m taking my agreement back.

I woke up at 7am today and wanted to read about the results of last night’s debate. I didn’t know where I could find this information quickly and succinctly2. I thought about the CNN chatbot, but CNN’s top stories are posted only at about 9am. Then I figured, let’s see if I can find something relevant in our bot.

I typed “hi”, and (to my surprise) the first story I got was right on point –

fb-chatbot-election-debate.PNG

Then I browsed a little more, and suddenly noticed that for any recommendation with a relevant title, I skim the description for more context. I also realized that I don’t look for completeness or quality; just a few more words that give a better idea of what the article is about.

“WHY” I get a recommendation, and why it’s important to me, wasn’t relevant in the context I was in – checking the news, the objective news, not what’s in my “bubble”.

Summary wasn’t relevant in that use case either, because much like clicking to read the story, it means “choosing” and focusing on one article, whereas I was still at the decision-making stage.

So, what I’ve learned from observing myself (in that rare instance, I acted as a user rather than a stakeholder) is that the description does have value, and that in certain use cases, such as browsing the news, I need objective hooks. The description, in that case, and not a personalized reason, was more relevant.

It’s definitely not a representative experience, but it’s one that makes me rethink what the baseline should be. And whatever the baseline is, we should put it to the test.

Footnotes:

1

Works pretty neatly. Here’s the summary for the article in the picture: “On Tuesday, thousands of people stampeded into a lecture hall in Guadalajara, Mexico, to hear SpaceX CEO Elon Musk talk about how he wants to colonize Mars. Another question is how — and if — Musk plans to prevent Earth microbes from contaminating Mars, and Mars microbes (if there are any) from contaminating Earth.”

2

I don’t go to sites to look for news anymore, and rarely google for news. And since the extinction of Zite, I now realize, I have no idea where I get my news from…

Google Allo – First Impression

Yesterday I installed the new Google Allo and gave it a first try. My team at Outbrain is responsible for building a chatbot CMS for publishers, so I was interested to learn about some of the decisions made in Allo, and to compare them with what we’ve learned over the last 6 months powering the CNN bots on Facebook Messenger and Kik.

User on-boarding

I downloaded the app, installed it, but then deleted it in the middle of the on-boarding. Why? Because Google is being overly transparent. Why do they make such a point of the fact that they are going to send my contact list to their cloud every now and then? There must be some evil reason for that…

Allo-onboarding-1.png

So, I deleted the app. But then I thought to myself, “wait, you’re using Google Contacts, and your contacts are already syncing with Google. Not periodically, but all the time, in real time…” I felt stupid, downloaded the app again, and completed the on-boarding. And I won’t say I felt better when the first few prompts from Allo kept pushing on that sharing thing, as if to tell me that I’d better not use it if I want to keep anything private.

Allo-onboarding-2.PNG

To sum things up, the on-boarding experience could have done more to instill trust and make me comfortable. Right now I’m not, and although he is a bit more of a privacy snob than I am, Snowden has already made a point about the lack of privacy in Allo.

Content experience

  • Typed “top stories” – I got relatively fresh stories, but definitely not important ones.
  • They put the publish time on stories. Seeing that a story was published 37 minutes ago gives confidence that they deliver news as it happens.
  • The stories carousel is clean and simple, but I would have liked to be able to take action on a specific story. This is possible in Facebook Messenger using the ‘Structured Message’ template (see the sketch after this list). Article recommendations in Allo feel temporary, since you can’t do much to engage with them other than read them when you see them. Adding an option to see a summary of an article, save it for later, or get more similar stories might give users a better sense of control over the experience and the stories they are seeing.
  • Google seems to think of Allo as a new interface for search, which makes sense for Google, but makes Allo feel like a browser. When you search for something, the first quick reply is “Google results”, which, once tapped, opens the browser and searches for your input. I didn’t like that it takes me out of the app.
  • The content in Allo doesn’t feel native. Rather, it feels like a patch, a cut-and-paste from the browser. Again, it makes me feel that Allo is just another browser.
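For comparison, this is roughly what one actionable story card looks like in Messenger’s Send API, using the generic template. The token, recipient id, and article data are placeholders:

```python
# A sketch of sending one actionable story card via Messenger's Send API
# generic template. Token, recipient id, and article data are placeholders.
import requests

ACCESS_TOKEN = 'PAGE_ACCESS_TOKEN'  # placeholder
SEND_API = 'https://graph.facebook.com/v2.6/me/messages'

payload = {
    'recipient': {'id': 'USER_PSID'},  # placeholder
    'message': {'attachment': {
        'type': 'template',
        'payload': {
            'template_type': 'generic',
            'elements': [{
                'title': 'SpaceX wants to colonize Mars',
                'subtitle': 'A few words of description for context',
                'image_url': 'https://example.com/thumb.jpg',
                'buttons': [
                    # Each button is an action the user can take on the story.
                    {'type': 'postback', 'title': 'Summary', 'payload': 'SUMMARY'},
                    {'type': 'postback', 'title': 'Stash', 'payload': 'STASH'},
                    {'type': 'web_url', 'title': 'Read', 'url': 'https://example.com/article'},
                ],
            }],
        },
    }},
}

requests.post(SEND_API, params={'access_token': ACCESS_TOKEN}, json=payload)
```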

Chat-flow and experience

  • There are no dead ends. Even when chatting with friends, you always have quick replies available. That’s great.
  • There are ‘like’ and ‘dislike’ emojis in the last two positions of every set of quick replies. They didn’t make sense to me. As a user, I don’t know what they mean, and hence probably won’t use them.

AI

  • That’s the part that surprised me the most. Allo tries to be smart. It tries as much as it can to be non-scripted. Say “hi” and every time it will answer with something different. The first time I typed “hi”, I got the entry-point experience, namely the options I have for interacting with the bot. Later, when I wanted to get to the same entry point, I typed “hi” again. This time, though, Allo tried to strike up a conversation with me. After a few more greeting inputs that got me nowhere, I gave up and typed what I was looking for.
  • At this early stage, when users aren’t yet educated in conversational design, and are accustomed to more deterministic experiences, trying to be smart is wrong. It’s like the early days of the iPhone – skeuomorphic design helped users get accustomed to it, through icons that imitated physical objects. Only once they were educated, more than 8 years later, was flat design introduced.

To sum things up, my overall impression is “ahh”. Yeah, it’s cool to play with Allo and see how well it handles natural language, but it’s no different from Google search. In fact, it feels too much like Google search, which is a bit outdated. But then again, I’m writing this post in Emacs…

Hostile Lead Generation

In the last few days I’ve been getting daily emails from TWC, promoting their “Time Warner Cable Business Class” service. I don’t know anything about this service, and since I don’t run a business, it’s irrelevant to me.

Until today I simply deleted those emails, but today I got annoyed, and made an effort to show it by unsubscribing from their mailing list. However, the unsubscribe flow made me think that TWC isn’t really deterred by requests to unsubscribe. In fact, it seems to be using them as another user acquisition channel. And I think I cracked the protocol of this funnel:

Marketing emails

Send daily emails to users whose email addresses we bought.
img

Keep sending those emails until the user responds, either by clicking the unsubscribe link or by selecting Gmail’s “report spam & unsubscribe” button.
img

Clean user information through an unsubscribe form

When a user clicks the unsubscribe link in the email, we have a precious opportunity to make sure the information we have about this user is correct.

img

When a user submits the unsubscribe form, we should update our database with the new information.

User is redirected to the TWC homepage

After we get a successful response from our servers, redirect the user to the TWC homepage.

img

We assume (or hope) that when the user submits the form, she moves focus to another tab rather than closing the one she submitted the form in. If this assumption holds, the user will have the TWC homepage waiting for her, and she’ll get to it in the near future.

User visits the TWC homepage

At some point, as we assumed, the user zaps through her open tabs and gets to the one displaying the TWC homepage. Great, we have a new lead! The user is visiting our site, meaning she’s interested in our service.

img

Hurry up and drop a cookie on her, and tie whatever information we can get to that cookie. Wait, we have her full name and email address!

Retarget potential leads

Let’s make sure we slice the bread while it’s still fresh, and find that user wherever she browses. This way we can nudge her just a little more, and try to get her to come back to our site and take another step toward conversion.

img

And wait, we have her email! That’s gold…

Well, no hard feelings toward TWC. It’s just an amusing example of the absurdity of how user acquisition works.

From an Idea to an MVP

So I have this idea for food recommendations, but now I’m struggling with where to start. What would be a good first use case for a POC or MVP?

At the moment, I’m planning a single-page dashboard that users can log in to, to get a report on their food-ordering habits – top dishes, top restaurants, and more such data.

But I have two issues with this scope:

  1. Not sure users will see the value that I see in such a dashboard. Now, sure, that’s what an MVP is for, but:
  2. It doesn’t test the real product assumption – that users will want to get food recommendations.

With that in mind, I thought of two other options:

  1. Dish following – as a user, I can follow foods that I like and see updates, recipes, and places where I can find them.
  2. Group ordering – offer a simple ‘negotiator’ for food ordering. For example, a group of 4 people who want to order lunch connect to this page and put in their preferences. The system matches their preferences and comes up with a suggested restaurant. Users can then save their preferences by registering. It reminds me of doodle.com. (A toy sketch of the matching idea follows this list.)
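Here’s a toy sketch of the matching idea – intersect the group’s cuisine preferences and suggest restaurants everyone is happy with. All names and data are made up:

```python
# A toy sketch of the group-ordering 'negotiator': intersect the group's
# cuisine preferences and suggest restaurants that satisfy everyone.
preferences = {
    'alice': {'sushi', 'thai', 'pizza'},
    'bob':   {'thai', 'burgers', 'pizza'},
    'carol': {'pizza', 'thai'},
    'dan':   {'thai', 'salads', 'pizza'},
}

restaurants = {
    'Thai Corner': 'thai',
    'Slice House': 'pizza',
    'Green Bowl':  'salads',
}

# Cuisines every member of the group would accept.
common = set.intersection(*preferences.values())

suggestions = [name for name, cuisine in restaurants.items() if cuisine in common]
print(suggestions)  # ['Thai Corner', 'Slice House']
```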

I like the second option for a few reasons. It requires less technical effort – I can see how I could pull off an MVP without writing a line of code. In addition, it could turn out to be a utility that users will be willing to register for. Lastly, solving for group orders introduces the possibility of virality and a network effect.

Make Food Fun Again

So, this idea that I had. I was talking with a company that does personalization for jobs – UpScored. I knew Elise, their CEO, from Twitter, and gave their newly launched product a spin. I came back with some feedback, and after a good chat with Elise, I realized that the problem I work on in my day-to-day – a personalization engine for content – is relevant to other fields as well.

A day later, I was introduced to one of the co-founders of Plated – a meal-planning service. His story of how they started the company reminded me a lot of how and why I started FeedMe – a marketplace-for-food company that I founded about 5 years ago and closed about a year later. He was talking about Plated as a food-tech company, while I was struggling to see what technology had to do with the service, other than having a consumer-facing website.

Anyway, at that point I was putting one and one together in my head, and the idea of developing a truly technological approach to food discovery popped up. I want to develop an app that will tell me what I want to eat. Let me offer some context, though.

While food-related technologies and services proliferate, the simple question of “what to eat?” becomes harder than ever.

Here’s an example: last weekend I went to Austin for the SXSW festival. I landed on Friday morning and headed downtown for breakfast. I opened Yelp to “discover” what I should be eating. But after looking into the first 5 results (out of hundreds), I became hungrier and less patient, so I picked the first restaurant I saw across the street that seemed decent. Yelp didn’t help me “discover” the best of Austin.

Another example: every night (no exaggeration), I have the same dialog with my wife:

Screen_Shot_2016-03-20_at_00.03.51.png

So the problem is that not being able to choose what to eat takes the fun out of the food experience. I want an app, or a service, that will make the decision for me, based on my history, my taste, my diet, and those of my partner in the order.

I spent the previous week researching, brainstorming with friends, wire-framing and what not, and got a long way toward defining the problem and focusing the approach to the solution. More on that in following posts.