Interview Transcript (Anonymized)

Giuseppe Silletti: Just to get started, maybe if you can give a very short intro about your experience - where you’re working now or your responsibilities, or if you’re not working, just to understand where you’re coming from.

Senior Engineer: Actually right now I’m mostly looking for a job, like many of us in these times. I’m working as a contractor, but very sporadically, collaborating with a British company that offers online training for engineers - working as a TDD coach, and also as an internal advisor on domain-driven design for a new training they want to deliver.

That’s my present, but mostly I’m looking for a job. In my last job I was a product engineer and a bit of a team lead, and before that I was a senior engineering manager - very hands-on. I was working in ensemble programming most of the time on a platform team. It was an internal product, which is also interesting.

Giuseppe Silletti: I’m interested in understanding the challenges that engineers face when it comes to product decisions. I’ll share three possible topics - if you can pick one that resonates with you and share one real story about it:

  1. Times when you spotted a problem that a product team missed
  2. Times when product asked for something that felt wrong from a technical or user perspective
  3. Times when you saw a product direction you wanted to influence but didn’t know how to approach it

Senior Engineer: The first one, yeah. I have a couple of stories. Let me pick the best one for this case.

I can tell you a story - if at any point you think it doesn’t fit, stop me and I can jump to another one.

When I was at a data analytics company - that was two jobs before, where I was working as a hands-on senior engineering manager - there was a huge problem with the data release process in the company. The process was extremely slow, prone to human error, and it required many people across many teams doing many things in a way that was not really clear. A lot of effort.

There was a goal in the company that we wanted to deliver data to our users faster. We decided to jump into this to try to see how we could help with that problem, because there had been many initiatives in the past and they all had failed.

From our perspective, the reason why they had failed was two problems:

One, they didn’t really take into account the users’ problems, the users’ needs. They were designing, architecting and doing things because they thought it was a good idea, but there had not been many conversations with the real users.

And the second problem - they tried to jump straight to the final complex solution at once, you know, the classic. Instead of going in baby steps from simpler things, they tried big designs, big implementations. They always failed.

In our approach, what we decided to do was to go in very small baby steps. The first thing we did was talking a lot, listening a lot, asking a lot.

We had to talk with nine different teams to really understand the whole process. One of the issues was that for every data release, there was a data release manager, always the same person, who had to orchestrate eight different teams. There was an order of execution of some things. Each of those teams had to execute several things with some dependencies - upstream, downstream - but everything was humanly orchestrated. If something failed, someone had to tell someone that it had failed. It was a mess.

The first thing we did was really understanding what nobody had - the big picture. Nobody had a real end-to-end understanding of what was needed to make a data release. Nobody.

So that’s where we started - talking with many people, with simple diagrams. You don’t need a big thing. We were diagramming in real time to understand where everything fit - upstream dependencies, downstream dependencies, who you need to notify, if this fails what should be done before and after, all that.

After that, what we decided was to wrap the whole process, first of all, in a manual way. We were using Argo Workflows. The first thing we did after having that understanding was to model the whole process in a workflow which initially required human action. But at least the steps and the dependencies and the notifications were in place.

We did that in Slack. It was very easy - the data release manager could start a data release from Slack, and that triggered this workflow, which was visualized in Slack. All the required steps showed as pending, done, or failed. Each team received automated messages: “okay, now you have to do this.” Once they did it, they just notified in Slack, and that triggered the next steps, whether it was a failure or a success.
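The human-in-the-loop orchestration described here could be sketched roughly like this. It is a minimal sketch, not the real system: the step and team names are invented, and Slack messages are stood in for by prints (the real system used Argo Workflows driving a Slack view).

```python
from dataclasses import dataclass, field

# Hypothetical step names; the real release had steps owned by eight teams.
STEPS = ["ingest", "validate", "publish"]

@dataclass
class Release:
    # Mirrors the Slack view: every step is pending, done, or failed.
    status: dict = field(default_factory=lambda: {s: "pending" for s in STEPS})

    def notify(self, team, message):
        # Stand-in for an automated Slack message.
        print(f"@{team}: {message}")

    def next_pending(self):
        return next((s for s in STEPS if self.status[s] == "pending"), None)

    def complete(self, step, ok=True):
        # A team reports a step as done or failed; that automatically triggers
        # the next step's notification (or alerts the release manager) instead
        # of someone having to tell someone.
        self.status[step] = "done" if ok else "failed"
        if not ok:
            self.notify("release-manager", f"{step} failed")
            return
        nxt = self.next_pending()
        if nxt:
            self.notify(f"{nxt}-team", f"now you have to do {nxt}")
```

The point of even this trivial version is the single shared status board: everyone sees the same pending/done/failed state, and feedback about failures is immediate.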

Initially it was human because most steps were human. But that was already valuable, because there was a single place where everybody could see the real status of the data release. And also the automated notifications - that was valuable, having faster feedback about the process.

The second step was starting to automate things. Some steps were already automated. From the workflow we could trigger executions in Airflow - mostly Airflow DAGs. We started replacing the human steps with automated executions, with visibility into whether each one succeeded or failed.

Some automation was already somewhere, some of it was easy to do. In other cases, we had to either ask the team to automate it, or we collaborated with the team and did it together, or we did it. It depended on the case.
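Replacing human steps one by one can be sketched as a step table where each entry is either a manual confirmation or an automated trigger. This is an assumption-laden sketch: the step names and DAG id are made up, and `trigger_dag` merely stands in for a real Airflow API call.

```python
def trigger_dag(dag_id):
    # Stand-in for an Airflow API call that starts a DAG run; hypothetical.
    print(f"triggering {dag_id}")
    return True

# None means the step is still manual and waits for a human in Slack;
# a callable means the step has been automated.
steps = {
    "export": None,
    "transform": lambda: trigger_dag("transform_dag"),
}

def run_step(name, human_done=False):
    action = steps[name]
    if action is None:
        return "done" if human_done else "waiting-for-human"
    return "done" if action() else "failed"
```

Automating a step is then just swapping a `None` for a callable - the rest of the workflow, and everyone watching it, is unaffected, which is what made the baby steps possible.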

The data release manager was amazed with all this. I repeated this baby-steps concept a lot, and it was brand new for them. We had some public praise on LinkedIn because he was amazed at how we succeeded in delivering value in baby steps. It was the first time that had succeeded.

In the end, this whole process was just a click. Which was really great. And the basis of all that was really understanding the needs, really understanding the big picture, and going from more simple things to more complex ones.

Giuseppe Silletti: Who else was involved in this project?

Senior Engineer: My whole team. We did it as a team.

Giuseppe Silletti: Mainly engineers? Only engineers? Or was there a product manager?

Senior Engineer: In this case, I was acting as a kind of product manager as well. My manager and I were both acting as kind of product managers. The rest of the team - I consider they were all product engineers, mostly on the backend.

The data release manager was very actively involved. And the eight teams that were involved in the data release, which required many modules to be executed.

What we tried to do - I like transparency and open conversations - was usually to ask in the team’s Slack channel for whatever we needed. We tried not to go to specific people, just the team. When we had conversations, interviews, whatever, we always tried to invite at least two or three people. I didn’t want to talk only with the team lead.

There were surprises always. Lots of assumptions, you know, about how things work. There were lots of assumptions in general in this data release. Even inside the same team, there were people with different views about how something works.

That’s why I wanted all the conversations in the open because that was the way that someone could spot something and say “oh no, no, no, this doesn’t work like that.”

Always trying to invite at least two people from the team. And we had lots of conversations, not only one conversation with each team. Lots of iterations because we were learning in baby steps. It was so complex that we couldn’t try to understand everything at once.

Each time we were able to understand something else. Also because of the levels of abstraction. Initially it was just the workflow with human steps, and when we tried to automate it, that required a deeper understanding.

Giuseppe Silletti: Can you tell me about a time when you’ve been working on a feature you owned end-to-end - from understanding the problem and defining what needs to be done to shipping the solution?

Senior Engineer: The one I told you could be an example, because it was a platform as a product. We were developing with Python and Flask - it was software development. But I can talk about another example.

We were at a company - a marketplace, 10 years old, in good shape, but with lots of problems delivering value after 10 years. I was working at a small company where we helped companies better develop software-based products, end-to-end, everything. We were hired to help them.

The first thing we did was selecting where to start. It was a huge application. We wanted to help them modernize their architecture and also their ways of work - both things.

We decided to start with a critical part in that application, which was a multi-step form. The marketplace was for people to look for a professional to do some work at their homes - painting or plumbing or making changes to the house, whatever, anything related with a house. There were lots of professionals there and people could search for a professional.

We decided to start with the multi-step service request, which was the key part because it was the way people described what they wanted - type of work, where they were located, urgency, characteristics of the home. Lots of questions.

It was not working very well, so we decided to start there. What we did was replace that multi-step form in baby steps. The goal was to improve the funnel. We did lots of A/B testing - front-end and back-end, everything.

Besides modernizing everything, we were not assuming - we didn’t just copy the old thing into the new thing. We started from scratch, really understanding the user needs, the user behaviors, what was working and what wasn’t.

There were lots of things that they had never challenged themselves about. They assumed that some steps were really needed. Because we had fresh eyes, I guess, we said “why?” There was a lot of friction there, and it didn’t really help the business better understand the user need either.

All those kinds of conversations and trials. We had a product manager in the company that we talked with. We also had access to some recorded interviews - behavioral interviews of users using it. And we had a couple of people we could put different versions in front of - not just show them, but have them actually use it, to see how they felt.

We were measuring a lot with Amplitude - measuring all the time. We had great results; we improved the funnel a lot. And it was also much easier to evolve from a technical perspective.

Giuseppe Silletti: Aside from the technical challenges, what do you think was the biggest challenge in collaborating towards this solution?

Senior Engineer: In this specific example, one thing that was challenging - related to the assumptions I mentioned - we talked with people from sales, from marketing, people from other departments who were involved in this multi-step service request form because it was quite critical.

Something challenging was the act of challenging things itself. There were lots of assumptions, so there was fear - a lot of fear of changing things. Fear of changing things and suddenly having much worse results.

We were trying to explain that this was about having faster experiments with low risk, in the sense that we could always roll back - no problem. We were doing everything with feature flags and A/B testing.
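The low-risk setup described here - feature flags plus A/B bucketing with an instant rollback - could look roughly like this minimal sketch. The experiment name and the kill-switch flag are assumptions, not the marketplace’s actual flags.

```python
import hashlib

ROLLED_BACK = False  # kill switch: flipping this sends everyone back to the old form

def variant(user_id, experiment="service_request_form"):
    # Deterministic 50/50 bucketing by hashing the user id with the experiment
    # name, so the same user always sees the same version of the form.
    if ROLLED_BACK:
        return "old"
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "new" if int(digest, 16) % 2 == 0 else "old"
```

The rollback story is what defuses the fear: reverting everyone is a one-line flag flip, not a redeploy.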

We were trying to make them understand - to introduce the scientific method of hypotheses. That was sometimes challenging because some people had strong opinions: “Oh no, it’s super important that users…” “Okay, but are you sure? Why?”

We were always trying to take all that to data - data from experiments. Then we could show: “Okay, see, after this time, we now have good enough data to say this is clearly working much better than this other option.”
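Deciding when there is “good enough data” usually comes down to a significance test on the two funnels. A minimal sketch using a standard two-proportion z-test (the numbers in the example are invented, not the marketplace’s):

```python
import math

def lift_is_significant(conv_a, n_a, conv_b, n_b, z_crit=1.96):
    # Two-proportion z-test at ~95% confidence: given conversions and sample
    # sizes for variants A and B, is B's conversion rate significantly
    # different from A's?
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return abs((p_b - p_a) / se) > z_crit
```

For example, `lift_is_significant(100, 1000, 150, 1000)` returns `True`: a funnel moving from 10% to 15% over 1,000 users per variant is clearly signal, whereas 10% to 10.5% on the same samples is not yet.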
