Today I’m sharing the presentation I used for my talk at the 13th Brazilian Congress for Management, Projects and Leadership. It was a great pleasure to speak to the many people who attended. I look forward to being part of it again next year!
Netflix doesn’t have a CTO (Chief Technical Officer). Having one would be a symptom of centralized technical decision-making. Netflix has only a CPO (Chief Product Officer), the chief for their products; there, products and IT are the same thing. Netflix is also one of the most innovative companies in the world when it comes to technology and sense of purpose. But how did they reach this point? What took them there?
Profiles and responsibilities
They were born this way. It was a simple mindset to keep while they were a startup in 1997, and they have preserved it over the years even as the company grew.
With a set of benefits for their employees, which can be summarized simply as the freedom to take whatever actions they think are needed, Netflix openly shares its approach to management and responsibility in its talks. A manager’s main concern is to hire the best people in the world. Every employee’s main concern is to make the best possible decisions, relying on the full company context available to them. Subscriber numbers, revenues and every area’s budget are part of the routine information that managers share with their teams. Managers ask their team members to go through a competitor’s hiring process at least once a year, to make sure they are being paid what they deserve. There is also no knowledge management: when an employee leaves the company, their projects die with them. And there are no junior, mid-level or senior titles.
Netflix avoids rules and processes because they believe that telling people how to act restricts their creativity and makes them stop thinking about how to add more value to the company. This has a cost: it is common for open positions to take more than six months to fill, because the hiring process is complex and highly subjective.
Some examples of these benefits:
- Undefined vacation time;
- Undefined maternity leave;
- Unlimited budget for technical subjects;
- Unlimited hardware and software budget;
- No predefined work plan: once you are hired, nobody will tell you what to do. You create your own project, work on it, finish it, test it, and move on to the next one. Along the way, your manager can help you make decisions if you want.
This freedom has only one restriction: act in Netflix’s best interest.
This is how the company, which proudly claims to hire only the best possible people for each position, and which has more than 4,000 employees, reached a level of ownership that is hard to match. Each person is responsible for their projects and has the autonomy to decide whether they are useful to the company or not.
What does this generate?
It generates an environment of incredible freedom, where people produce what they consider important. Since all employees are the best in their fields, it is understood that they have enough knowledge to make the best decisions. It generates more than 100 projects running simultaneously across all of the company’s areas, being tested at every moment. IT projects can’t take more than two months to finish. A release goes to production every two hours, and nothing is activated before going through A/B testing.
The great objective of ownership is also achieved because all of these conditions are intended to delegate power. Every Netflix employee must be capable of making decisions without depending on endless validation processes or unnecessary opinions. Mixing freedom, responsibility and power is how Netflix reached and maintains its very high level of ownership among employees, and how it has remained one of the most innovative companies since its creation.
The model and what to learn from it
Compared to other companies in different fields, from the most traditional industries to the most recent startups, Netflix reached its level of ownership by combining a few things: almost unrestricted power, almost unrestricted freedom, and seniority/maturity as a premise.
This is a unique model that should not be pursued blindly, but used as inspiration: a disruptive way they found to manage their company.
This article was written by me, Eduardo Diederichsen and Felipe Lindenmeyer. Eduardo and I are managers in ilegra’s Software Development area, and Felipe is a senior account manager. All of us deal with client demands daily.
We negotiate with many clients every day. The hardest part is not quoting a price or delivering a good presentation with beautiful slides full of buzzwords. The hardest part is identifying whether a potential client has a real challenge, and how that challenge can be approached with our company’s capabilities, language, market vision, and so on. When we figure that out, that client deserves our deep focus, so we can build a solid understanding and offer something that fits their needs perfectly, even if they don’t understand it yet; then we help them understand it.
But what makes an experience perfect enough for a client to sign a new contract? It’s impossible to say, because the ones buying from you are people. People rely on different things to evaluate their experiences, giving more weight to different points based on their personality and the influences they have had throughout their lives.
This recent new client went through a complete journey, from the very first contact to contract signing, which they themselves conducted with many companies. In the end, they decided to buy from us.
The client scenario
This client comes from the financial sector, so they know that financial customers in Brazil are very demanding regarding the whole UX and product flexibility. Brazil is among the countries with the most developed demand for user experience in the world, so the competition and investments here are huge.
What happened: the first contact took place at a casual party, unrelated to work. The conversation turned to work, and we very briefly explored a few examples of how we are helping some important companies in Brazil stay ahead of their competitors. What really happened: we caught their attention and business cards were exchanged. A few days later, they asked for a meeting to discuss our portfolio and share their requirements and concerns.
What happened: the meeting took place at the customer’s facilities. The goal, as they had requested, was to discuss a few scenarios we had been working on and the opportunities we identify and foresee as experts in the market. What really happened: they understood that we had the knowledge to be their partners, and that by counting on us they would have the opinions of specialists in their market. So the next step was to send a proposal and celebrate? Yes, but not so fast.
- FIRST! What happened: they are a very conservative Brazilian firm that hasn’t adopted digital transformation practices yet, and didn’t even want to. We are used to working with agile methodologies, Lean approaches, and testing and discarding losses very quickly. They wanted everything predictable, with very clear goals and steps. What really happened: they understood that their competitors no longer work that way, and that by sticking to it they would be left behind in the competitive landscape. We then reached a mix of practices that gave them part of the control they were asking for, while giving the project part of the freedom that kind of work needs.
- SECOND! What happened: they told us they wanted to be ahead of their competitors while knowing exactly how much it would cost and how long it would take. Well, if we knew that, we would be one of their competitors. That’s where the Lean, experimental mindset gets attention: the startups bothering giant industries don’t have that kind of answer, and they won’t either. What really happened: we decided to go for a first deliverable (we can call it an MVP), with a further evaluation afterwards to redesign the plans.
- When things cooled down: What happened: we had taken the steps above, but the decision was taking too long and things were cooling down (bad news). Our move was to invite them to our company’s headquarters to see with their own eyes everything we had been discussing. They were really impressed with our office, because it’s designed to give people the freedom to think and innovate. This was a key step, because they spoke with the people who would actually work on their project. What really happened: the deal got hot again.
Things everyone faces differently
There are a lot of subjective things in the description above. I’m not exploring their body language, or ours. I’m not exploring the exact sentences we exchanged or how we looked during the meetings. So I’m not explaining how we created empathy here. Let’s get to the things closest to that.
The client scenario
Some companies can be afraid of innovating. It’s a whole new scenario, and people fear the unknown. They don’t know what is coming, so they get afraid and simply stop. For other companies, the unknown is exciting; they know that innovation will come from there, and they seek it as a routine.
Since the first contact happened at a social event, things were easier. The focus was not evaluation. Empathy came easily because we spoke about our cases briefly (without turning the whole night’s focus to work) and humbly. If we had been too eager about their needs, it could have become boring and we could have lost that person’s attention.
The meeting was all about showing our capabilities, while being flexible enough to understand their concerns about models and what we were proposing. We invested some time explaining and demystifying software development practices such as Dojos, Meetups, team management models and our concerns about quality (A/B testing, chaos testing, etc.). And then a big thing happened: the empathy with one of the people was so strong, and he saw so much value, that he started enthusiastically defending some of our approaches to the sponsors himself.
We had to use a lot of knowledge to show them the differences between the way they were approaching the problem and where we see the market moving. It was hard work to understand how they handle projects and blend that with a reality we could work in, confident that we would reach the results both sides were hoping for in the new partnership. But the biggest step was using the experimentation mindset for the first phase: they wanted to give our suggested model a shot. That was our chance to keep things going well and build more trust.
When things cooled down
That was the dangerous part. Calling repeatedly and wearing out the client’s patience would not have been an efficient approach. Bringing them to a controlled environment was a good move. Sometimes people don’t absorb everything you say when you present something; you have to repeat it until you hit the moment when your explanation makes sense to them, and then you capture their attention and interest.
Being more generalist here, with a few more examples
One customer may like to hear the latest buzzwords, all pronounced in English. Another client, coming from the countryside, may not, because he thinks that’s something for bigger companies. The proposal: one customer may prefer a document with a set of beautiful images presented in a more abstract way. Another may prefer a one-page document that gets straight to the point using plain text. It’s unpredictable.
Good! What can I learn from this scenario?
CX (Customer eXperience) becomes more challenging when we are talking about B2B or B2B2C offers. In a B2C scenario you will probably have one person at the edge whom you have to please with your offer and your advantages. It’s easier to ask them: “Hey, did you like this new feature?”. In B2B or B2B2C, the variables are countless, since you will deal with many people from the very beginning of the negotiation until the final contract is signed.
How do you tackle that efficiently?
Short answer: be interested. Long answer: keep the knowledge with the people involved, build experience, and be interested in evolving the process and the lessons learned. Try to understand body language and psychology, and get to know the person buying from you. Does that person write publicly? Do they give talks? What are they speaking, writing, reading, listening to or studying that you can take into account to set up the right moment for a quick approach?
Now that the last article has covered the differences between the three approaches to AI, let’s dive into the mid-term approach, always keeping the borders in mind. Given the distinction between the Machine Learning approaches — a) ready-to-use APIs, b) training a model, and c) creating a model — let’s talk about training (using) a model.
Training a model
This is the mid-term approach to AI (Machine Learning) problems. Once you have found out that your problem can’t be solved by any ready-to-use API, try this approach. Just because there is no ready-to-use API, it doesn’t mean nobody has ever tried to solve your problem in a generic, broadly applicable way. There is a high probability that your problem can already be solved by an existing model. With this approach you will have to look at three things. They form a sequence rather than a strict dependency, since the third step can be skipped in some cases (example below).
Finding the best model
This is the part where you need someone with solid experience in the subject. There are different models that solve the same problem, for instance, and there are also many problems with no model covering them. You will have to find the model that best fits your needs: check the confidence level the model provides, whether it handles all the information you have, whether you will have to adapt any of your existing information to use it, and many other things;
We can split the models into three different groups:
Models for supervised training
Supervised training happens when you know that the algorithm must reach conclusion X (the objective) after evaluating information A (input 1), B (input 2) and C (input 3). Example: you know that sneezing (input 1), high body temperature (input 2) and body aches (input 3) mean you have the flu (the objective). Here are some well-known models:
o Linear regression – https://docs.aws.amazon.com/machine-learning/latest/dg/types-of-ml-models.html – Good for predicting numbers. Examples: what will tomorrow’s weather be? How much will this house sell for?
o Decision tree – https://www.ibm.com/support/knowledgecenter/en/SS3RA7_15.0.0/com.ibm.spss.modeler.help/nodes_treebuilding.htm – Finding the disease: are symptoms A, B and C present? Then disease X. Are symptoms A, B and D present? Then disease Y.
o Bayesian network – https://pt.slideshare.net/GiladBarkan/bayesian-belief-networks-for-dummies – Used when we have evidence and want to reach its cause: belief propagation. The same health scenario above applies, but inverted: the patient has the flu, so I must determine whether they show all the symptoms; even without sneezing, they may still have the flu.
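The decision-tree idea above can be sketched as a tiny hand-coded tree, following the flu example from the text. This is only an illustration of how the splits work, not a trained model; the alternative diagnoses (“common cold”, “allergy”) are invented for the example:

```python
# A minimal, hand-rolled decision tree for the hypothetical flu example.
# Each question splits the remaining cases; a real tree would learn
# these splits from labeled data instead of being written by hand.

def diagnose(sneezing: bool, high_temperature: bool, body_pain: bool) -> str:
    if sneezing:
        if high_temperature:
            if body_pain:
                return "flu"          # inputs 1 + 2 + 3 -> the objective
            return "common cold"      # hypothetical alternative branch
        return "allergy"              # hypothetical alternative branch
    return "healthy"

print(diagnose(True, True, True))    # -> flu
print(diagnose(True, False, False))  # -> allergy
```

A trained tree works the same way at prediction time; the training step only decides which questions to ask and in which order.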
Models for unsupervised training
Unsupervised training happens when you don’t have the conclusion the algorithm must reach; you will have to check it every time it runs. Example: if a customer bought products X and Y, they may be interested in product Z. You won’t know whether that’s true, because the customer may get interested and still not buy the product. Here are some well-known models:
o Association – https://en.wikipedia.org/wiki/Association_rule_learning – The same example above of suggesting products to buy;
o Anomaly detection – Any chart, control or information stream where anomalies must trigger alerts; the stock market or the temperature inside a factory’s chamber, for instance;
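The factory-chamber example can be sketched with one of the simplest anomaly detectors: flag any reading far from the mean in standard deviations (a z-score). The readings and the two-standard-deviation threshold below are assumptions for illustration, not tuned values:

```python
# Minimal z-score anomaly detection over temperature readings.
import statistics

def find_anomalies(readings, threshold=2.0):
    """Return readings more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(readings)
    stdev = statistics.stdev(readings)
    return [x for x in readings if abs(x - mean) > threshold * stdev]

# Hypothetical chamber temperatures with one spike:
temps = [70.1, 69.8, 70.3, 70.0, 69.9, 95.0, 70.2, 70.1, 69.7, 70.0]
print(find_anomalies(temps))  # -> [95.0]
```

Production systems use sturdier methods (rolling windows, robust statistics, learned models), but the core question — “how unusual is this point?” — is the same.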
Models for semi-supervised training
Semi-supervised training is used when you only sometimes know the conclusion you will reach. Your problem will determine whether you can use this kind of model. More models can be found at https://en.wikipedia.org/wiki/Outline_of_machine_learning#Machine_learning_algorithms
Configuring a model
This happens when you have a known model and must configure it. Sometimes you won’t even have to train it.
An example with linear regression, which came from another customer: they wanted to combine information from many different sources and find out how it would affect their product pricing. For each product, you configure the algorithm to understand that supply A affects 10% of the final price, supply B affects 50%, and so on. Knowing that, the algorithm is able to “predict” changes in their prices and warn them to buy more or less of each supply. That way they would stay ahead of their competitors, saving money at the right time;
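The “configure, don’t train” idea above can be sketched as a linear model whose weights are set by hand instead of learned. The supply names and percentages are hypothetical, taken from the example in the text:

```python
# A hand-configured linear pricing model: each supply's weight says how
# much of the final price it accounts for. No training step is involved.

WEIGHTS = {"supply_a": 0.10, "supply_b": 0.50, "supply_c": 0.40}

def predicted_price_change(cost_changes: dict) -> float:
    """Combine each supply's cost change, scaled by its configured weight."""
    return sum(WEIGHTS[s] * change for s, change in cost_changes.items())

# Supply B got 10% more expensive and supply A 5% cheaper:
delta = predicted_price_change({"supply_a": -0.05, "supply_b": 0.10})
print(f"expected final-price change: {delta:+.1%}")  # -> +4.5%
```

Training a linear regression would mean learning those weights from historical data; here the domain experts already know them, so configuration is enough.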
And then… Training a model
Once you have a problem that requires a model to be trained to identify your target, you need data to train it. The image analysis APIs that cloud providers offer are great examples. Once you upload a picture of the Eiffel Tower, the algorithm already knows there is an Eiffel Tower in your image. How do they do that? They have already trained the model to recognize patterns in the image and classify it. It’s the same thing Facebook does every time it recognizes faces in your uploaded photos. The Facebook example is even more impressive, because Facebook trains its algorithm with everybody’s faces: it knows your image contains a photo of a specific friend and suggests you tag them. It’s not just generic person recognition like other models do.
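At a toy scale, the training loop behind all of this — predict, compare to the label, adjust the weights — can be shown with a perceptron separating two classes of 2-D points. The data below is invented for illustration; real image models do the same thing over millions of parameters:

```python
# A toy "training a model" loop: a perceptron learns weights that
# separate two classes of labeled points.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred            # 0 when the prediction is correct
            w[0] += lr * err * x1     # nudge weights toward the label
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Hypothetical data: class 1 sits in the upper-right corner.
samples = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
labels = [0, 0, 1, 1]
w, b = train_perceptron(samples, labels)
print([predict(w, b, s) for s in samples])  # -> [0, 0, 1, 1]
```

The quality of the result depends entirely on the training data, which is why the providers’ huge labeled datasets matter so much.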
How to do that?
Finally, there are many tools for this, such as Google AutoML, Amazon Machine Learning, Watson and TensorFlow (an open-source tool). The providers’ solutions allow you to send a given model to the cloud and then use their infrastructure to run, train and consume it.
Everybody has been talking about Machine Learning, and everybody wants the benefits of Artificial Intelligence. It’s a new thing: IT managers grabbing that old problem from the back of the locker and thinking: “Hey! Maybe the new Watson can solve it for me!”. But every time I hear someone asking how to solve a problem with AI, the problem looks like something never seen before. Every day a solution is researched for a new problem. If a new one comes up every day, how can we identify the borders of AI? Since AI stands for “Artificial Intelligence”, what is the border of that “intelligence”? What can and cannot be solved with what we have today?
How to identify how hard it will be to find an AI approach for your scenario
Machine Learning projects can be split into three groups:
Using a ready-to-use open API (the focus of this first article)
- What is it? This is the fastest approach. There are plenty of APIs ready to be accessed and added to your solution. There are a lot of benefits, and you just have to pay for them. There is a table with more details below;
- How long will it take? You can get results from some tests within one day;
- Some benefits:
- a) They are ready to use! You just have to plug them into your app. Anyone can do that;
- b) Their suppliers keep training the model as you go, so it will never be outdated;
- c) Competition between suppliers guarantees well-trained models and non-stop improvements and updates;
- The items above would be very expensive to achieve outside this approach;
- Some restrictions:
- a) It doesn’t belong to you, which means you can’t change anything about how it works. It’s just you asking: “Hey, please classify this image!” and the answer coming back: “Cool! Your image has a woman in it.” But you can’t ask back: “What’s the woman’s hair color?”;
- If the open APIs fit your needs, don’t think twice and start using them now! Don’t worry about suppliers grabbing your data or anything like that. Once you pay for the tool, you have a contract in which they state they won’t use your data. It’s the same thing as cloud;
- If they don’t fit your needs exactly, try to understand how important it is to have that extra 1% of confidence in the AI’s judgement. If you really need something more precise, jump to “training a model”;
Training a model
- The mid-term approach. There are a lot of different models ready to be added to a project, configured, trained and then used. I will talk about this approach in the next article;
Building a model
- Keep in mind that this won’t be an IT project for a while. From the very beginning you will need physicists, mathematicians and specialists in your business, plus really good data to feed the project. Once they finish the model (which can take up to two years, maybe more), it turns into a regular IT project starting from “training a model”. Both of these projects (building and training) will certainly take more than two years of research and testing. If it is a key process for your company, don’t waste one more second and start the project now. The sooner you start, the sooner you get the benefits;
Cool! What are the ready-to-use APIs?
My suggestions are in the table below, but the options are not limited to it; you can find many others. These are maintained by the biggest cloud and AI players, which means you can trust them, and for their areas of focus they are probably the best you will find.
| Category | Google | IBM | Microsoft | AWS |
| --- | --- | --- | --- | --- |
| Chatbot related | DialogFlow | Watson Assistant and Virtual Agent | Bot | Lex |
| Video analysis | Video Intelligence | Intelligent Video Analytics | Video Indexer | Rekognition |
| Image analysis | Vision | Visual Recognition | Computer Vision API | Rekognition |
| Speech to text | Speech | Speech to Text | Bing Speech | Transcribe |
| Text to speech | | Text to Speech | Bing Speech | Text to Speech |
| Natural language classifier | Natural Language | Natural Language Classifier, Natural Language Understanding, Personality Insights and Tone Analyzer | Language Understanding | Comprehend |
| Trends search and analysis | Trends | Discovery and IBM’s Discovery News | | |
| Find patterns over unstructured text | | Knowledge Studio | | |
| Content moderator* | | | Anomaly Detection | |
| Jobs discovery | Job Discovery** | | | |
* Google, IBM and Amazon have content moderation built into their products. Microsoft has this specific product, which looks for anomalies only.
** Job Discovery is a private tool available to only a few partners.
Two examples to talk about the borders again
Just like the example above: a customer came to me asking for a solution to identify people in images. Great! Let’s use Google’s Vision! Vision identifies people in photos and gives a lot more information about the colors in the image, the places it may contain, and so on. But then the customer asked: “I want to recognize whether it is a woman.” I said OK. And then: “I want to recognize the woman’s hair color.” OK, all open APIs are out of the game. Let’s find a model, train it, and then get hair colors. There is no shortcut to answering those questions: you will have to read the documentation of each open API you find and run tests against it.
Language defect recognition
Another customer asked me whether they could give their employees a microphone so they could operate a system using only voice commands. OK, that’s not new; we could use a mix of speech-to-text and natural language processing APIs, so let’s move ahead! But then the customer said the system should recognize internal terms, like acronyms and words they invented to communicate with each other. Erm… that’s not possible. You can’t train ready-to-use APIs to understand your own specific terms. The easiest way out was to suggest that the operators replace those words with others the system would recognize. Otherwise they would have to grab models, then configure and train them to understand the new words.
So why not take your first AI step by evaluating ready-to-use APIs? The sooner you start, the sooner you will understand how to approach that old problem.
I heard “cloud is the new black” during a training session inside a Google office. You can guess it means the cloud is now basic. But why did they say it? Google didn’t coin this beautiful sentence; Gartner did.
What does Gartner mean by “cloud is the new black”? In short, Gartner said the cloud market is a US$ 1 trillion market. Right now it’s just US$ 56 billion.
Google repeats it for commercial purposes, of course, for the same reasons Microsoft, AWS and every other cloud provider do: scalability, stability, abstraction of infrastructure processes, and so on. And I totally agree with those reasons. But setting these technical questions aside and looking at the final business results, why is the cloud so basic?
How does that affect our business?
Yes, a lot of companies are already moving to the cloud or starting their applications there. But I can still easily find many companies that are not even considering the move. From where I stand, that is hard to understand. How can they not see the cloud’s benefits? How can they still run their own machines and spend millions of dollars buying more and more storage every six months? To me it’s a waste of time, and I’ll explain why.
Consider the rooms where the machines are hosted. They cost money. In a few companies, I’ve seen very expensive entire blocks in prime areas like São Paulo being used to host… machines. The datacenter doesn’t need to be that close to the offices; the latency doesn’t matter that much, I’m 100% sure. If they removed everything and rented out the space, the rental revenue would pay for a big slice of the monthly cloud cost. What if they sold it? That would mean investment for areas that need the money to innovate and get ahead of the competition. Because of that lack of money, those areas are losing time. It’s a waste of time.
Recently, a datacenter close to the company where I work caught fire. Many governmental and private companies, with core and non-core systems, were affected. And where were the backups? Inside the same building. Because of the fire, the firefighters and police didn’t allow the technical team to get in and move the data to another datacenter. The replication wasn’t automated. It caused more than 12 hours of system downtime. Can you imagine any company, in any industry, unable to receive transactions for 12 hours? Imagine the financial sector. Hard. Now imagine a factory without systems for 12 hours. Will they simply not sell for a whole day? You could answer: “Yeah, but we can take notes.” At this point, employees barely remember how to hold a pen. They also won’t know how much of each product they produced, or how much they spent producing it. But the main issue is the overhead those companies will face putting everything back into the systems. In Brazil, they can even be fined by the government because of the transactions missing from that day. What does all this mean? A waste of time.
Why not cloud?
I have a client who runs a retail solution in the cloud. The solution has been running for the last two years without downtime. Is it core to their business? No, it’s not. But the fact that they have no headaches with that small system frees up time to think about other things. It saves tickets for traditional infrastructure teams. It saves their mental health, too. Of course, the cloud itself is not the only reason for the application’s stability; they also care about quality in their development process. Then all these benefits come easily.
When will companies decide to migrate to cloud?
All of this makes me think that using the cloud is related to maturity. Now a quick link with internet startups: companies that grow by unbelievable percentages every year around the world. Most of them have the cloud in common, and most of their business models wouldn’t be possible without it.
The traditional companies that have already felt startups “bothering” their market share are moving, or have already moved, to the cloud.
Why does that happen? Because the cloud gives them the speed they need. Things I’ve seen in on-premise vs. cloud environments:
- A new environment for a new app can take up to one month to be released by the infrastructure team before the development team can start working. That’s one month lost on the project. In the cloud, it’s solved in less than an hour.
- Analytics generated only from data up to the previous day. In the cloud, you can have live information for your decisions.
- Analyzing petabytes of data without having to do it over the weekend, when there is no contention with other running systems. In the cloud, you can do that whenever you want, without buying millions of dollars of infrastructure in advance.
All these examples say the same thing: when companies start feeling left behind because they are slower than their competitors (startups or not), they will change.
So, why is cloud the new black?
So… getting back to it: why is cloud the new black? Because it means saving time. If this text reminded you of any issue you are having, or might have, inside your company, you will run after it to solve it. It won’t be a short run to find everyone responsible and ask them to adopt your new ideas; it will take weeks, months at least. Those weeks or months spent fixing or preventing something are weeks or months not spent improving the business or getting ahead of competitors. IT is not just support anymore. It can’t JUST be prepared for whatever the other areas demand; it HAS to be one of the leading business areas. And why is that true? The IT people know what technology can do. The other areas don’t.
A lot has been discussed, and still is, about the cloud journey. The world’s big companies already orient their strategy toward the cloud from the moment applications start being planned. Security, scalability, autonomy and many other related subjects are no longer debated at project-discussion time; the cloud no longer generates doubt or mistrust. Since going to the cloud is an old subject, beyond the attention points already raised, I discuss here some mistakes I’ve seen, as a way to support and contribute to new moves:
Facing the cloud as a side need
The cloud is the main actor in applications. It’s very common to see companies starting their journey with simple backup or storage routines in the cloud; this way they can start decommissioning their own datacenters. It’s an important step, often needed to reassure a board that is skeptical of change. But rollback, backup, disaster recovery and many other routines that used to be hard for OPS teams to implement are now trivial for the big cloud players. Using the cloud with only this objective is underestimating its potential.
Fully managed services have a very high level of automation for the points above. Take advantage of that process automation instead of wasting time and money researching, planning and implementing it yourself. Even the services that are not fully managed have a lot of automation built in: backup, rollback and DR are now the basics.
Neglecting the decision matrix
Choosing one cloud provider over another is very important and often underestimated. Selecting a cloud provider is like a marriage and can lead to a troubled relationship! I’ve seen companies trying to escape traumatic 10- or 20-year contracts with giants like Oracle or IBM while neglecting the cloud decision matrix. They ended up signing new deals leading to new 10- or 20-year contracts with provider X or Y.
Big players like Google, IBM and SAP allow a very high level of customization. Keep that in mind during the decision. They let you configure everything a big application needs, covering dependencies, integrations and relationships with other systems, specific engineering practices, and so on. It’s also very common for complex applications to benefit from multi-cloud operations. For instance: infrastructure that is cheaper on provider X can benefit from FaaS big data services on provider Y, which handles large data volumes faster. This way, a single application can be distributed across more than one cloud provider, and across more than one geographic region, enabling business objectives like cost savings, latency reduction and access to specific features from niche players.
Besides the big players, there are many niche PaaS and FaaS providers. Digital Ocean, Heroku, Umbler, and Elastichosts, for instance, are useful for less robust applications. These platforms offer high levels of abstraction, which shortens the development/operations team's learning path.
Shallow knowledge of the chosen cloud's potential
The savings that can be reached using cloud resources, like the rollback, DR, and backup mentioned above, plus monitoring, alerts, and automated actions, are very high. Take them into account when executing your migration project!
Evaluate PaaS services to run applications, or FaaS for network tasks, machine learning, deploys, tests, and more. As an example, every company I've seen running software on on-premise infrastructure ended up, at some point, developing something in-house to improve its deploy and testing tasks. That means that if 10 companies each developed similar software at a cost of 500 hours, together they spent 5,000 hours creating nearly identical tools. With services like CodeDeploy, Device Farm, Fastlane, and Firebase, this kind of development is no longer needed. Fast access to these services saves many development hours, makes companies faster, and lets them respond sooner to their customers' needs.
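The arithmetic of that duplicated effort is worth making explicit. A tiny sketch, where the per-company adoption cost of a ready-made service is a hypothetical figure:

```python
def wasted_hours(companies, hours_each, adoption_cost=0):
    """Industry-wide hours lost when each company builds the same tool
    in-house, compared with each paying a (hypothetical) adoption cost
    for a ready-made service such as CodeDeploy or Fastlane."""
    in_house = companies * hours_each
    adopted = companies * adoption_cost
    return in_house - adopted
```

Even if adopting a managed service cost every company 40 hours of setup, the industry as a whole would still recover thousands of development hours.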
Poor investment tracking
Do not treat the cloud budget as a black box. Track the investment with capable people and break it down by application, business unit, or whatever makes sense in your reality. That way real decisions can be made about contracting or dropping an evaluated service. This is one more practice to keep the "marriage" with a cloud provider from turning into those decades-long contracts with Oracle, IBM, and the like, which every big company is trying to escape nowadays. We have already made these mistakes in the past. Let's learn from them!
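Breaking the bill down usually starts with resource tags. A minimal sketch of aggregating a billing export by an application tag; the field names are illustrative, not any provider's exact export format:

```python
from collections import defaultdict

def spend_by_tag(line_items, tag="application"):
    """Aggregate billing line items by a resource tag.

    `line_items` is a list of dicts with a `cost` and a `tags` mapping;
    untagged resources are grouped so they can be hunted down.
    """
    totals = defaultdict(float)
    for item in line_items:
        totals[item["tags"].get(tag, "untagged")] += item["cost"]
    return dict(totals)
```

The "untagged" bucket is the important design choice: it surfaces exactly the spend nobody owns, which is where black-box budgets come from.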
As an example, it's still common for companies to spend large amounts of money buying different mobile devices to test apps. Not knowing about the services that automate testing on those devices keeps that money being wasted, and the benefits of automation never reach the development process.
With more and more niche competitors growing, a team that doesn't keep itself up to date can become a big problem. Not knowing good alternatives to your current services prevents you from discovering and using new benefits.
The more different views you have of the same subject, the more information you have to draw conclusions and make decisions when needed. The first article about IT evolution took the developers' view and only a superficial business view. This new article looks from the perspective of clients buying software, and at how their demands differ even within the same industry.
How software used to be bought
In past years, companies bought software like any other commodity: "I want 5 cars, 4 wheels each, manual transmission, painted white." They didn't care how their software was produced, or about any good practices applied to the process. Yes, software requirement documents had sections about security, protocols, and the like, but at the end of it all, the lowest price would win.
The old (but not that old) decision matrix
The decision matrix (example below, using a car-buying process) was very simple in its choice of criteria, and the criteria themselves were superficial. The high-level criteria were typically: security levels, performance, proposed schedule, price, scope coverage, success cases, knowledge of the selected technology, etc. In this kind of matrix the weights are set per project, but after all the criteria were evaluated, price and schedule would still determine the winner.
In the example above: what exactly are "comfort" and "styling" for this buyer? Safety, at least, can be evaluated using public government data. The parallel shows how none of the criteria is detailed.
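The mechanics of a weighted decision matrix are simple enough to sketch. The criteria, weights, and ratings below are made up for illustration:

```python
def score(weights, ratings):
    """Weighted score for one supplier: sum of weight * rating."""
    return sum(weights[c] * ratings[c] for c in weights)

def rank(weights, suppliers):
    """Rank suppliers (name -> ratings dict) by weighted score, best first."""
    return sorted(suppliers, key=lambda s: score(weights, suppliers[s]), reverse=True)
```

Note how the outcome hinges entirely on the weights: with price and schedule weighted heavily, the matrix reproduces exactly the old "lowest price wins" behavior, no matter how many other criteria appear in it.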
Anyone claiming they could cover the entire scope, within the requested schedule, for a fixed amount of money could win the fight for the project. If they lacked the knowledge, it made no huge difference. After all, software was treated like any other product: it doesn't matter how it's made, it only matters that it works.
Recent decision matrix
Leaving aside government and a few other industries barely touched by digital transformation, this reality has been changing.
What I've seen lately is more and more time being spent on detailing the criteria. For each criterion, buyers want to know what the supplier has already done, how they plan to build the solution, and what alternatives exist for everything.
- Security: how will you ensure the information exchange happens in a controlled environment? Tactics for code obfuscation, the platform's inherent security concerns, cloud provider certifications, etc.;
- Schedule: from "a detailed schedule" (which is never met) to "a macro view of the timeline plus a mix of agile methodologies to be followed";
- Success cases: show where and when similar solutions have already been delivered;
- UX: from "the system has to have good usability" (completely subjective) to "it must follow Google's UX guidelines and be led by a dedicated professional, not by developers";
- Performance: from "the system has to load in under 10 seconds" to "each endpoint must respond in under 2 seconds";
- Scope: from "do everything desired" to "let's do as much as we can within the schedule";
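The performance criterion above is a good example of how the new style of requirement becomes directly testable. A minimal sketch of checking a callable (say, a request to an endpoint) against the agreed limit; the helper name is hypothetical:

```python
import time

def responds_within(call, limit_seconds=2.0):
    """Check whether a callable finishes within the agreed limit.

    The 2-second default mirrors the kind of per-endpoint contractual
    criterion described above; a subjective "loads fast" cannot be
    checked like this, a numeric limit can.
    """
    start = time.monotonic()
    call()
    return time.monotonic() - start <= limit_seconds
```

This is the practical difference between the old and new decision matrices: detailed criteria can be automated and verified on every build, subjective ones can only be argued about.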
Companies want to know in detail how their project will be conducted, and they want a say in it. Since digital transformation is turning companies' core business into IT, IT is no longer just a support function. Companies can now make suggestions and talk at the same level as the consultancies.
In this scenario of comparison and evolution, the MVP mindset is very clear, along with the goals of faster time-to-market and faster revenue.
Different requests inside the same industry
The picture above holds for 100% of companies in the most mature industries in Brazil. But for the remaining industries, retail for example, it doesn't:
There are stores already evolving their e-commerce operations for security, performance, scalability, stability, and, most important, user experience. But there are also those that treat their online stores as a mere obligation. It's common to copy layout practices from competitors. And security is expensive, so some companies keep a budget for losses to hacker attacks instead of investing in security.
The good news is that the market is evolving faster and faster. Soon no industry will be behind.
Having managed many projects under many different methodologies and mixes of them, I've recently noticed some attributes that, whenever present, turn any of them into a suicide project to manage. By suicide project I mean one that will accumulate issues and decay until it reaches an unacceptable state, leading to the loss of a client, or something equally extreme.
The items below are common to all management styles, and if any of them is part of your routine, it's time to change something. Some of them concern people and relationships; others are technical.
No QA environment
A project with only a formal production environment: development happens on the coders' own machines, and there is no QA (Quality Assurance) environment.
It sounds crazy, right? But it still happens, usually when whoever pays the bill doesn't understand how catastrophic the absence of a Quality Assurance / Testing / Homologation environment can be. In that situation you are trusting your luck day by day, like the image above.
Ideally, the QA environment should be an exact copy of production. Routines that copy PRD (production) to QA must run from time to time. With that in place, whenever a problem appears in PRD, it is faster to reproduce it in QA; if it's faster to reproduce, it's faster to find a solution and fix PRD. When production is fixed quickly, the client loses less money from the failure, and its brand suffers less from the outage.
There can be lighter setups or minimum requirements that still allow a trustworthy test before sending new code to PRD: the QA data may differ from current PRD data, the environments may run on different hardware, and so on. But the more QA and PRD differ, the more time will be spent tracking down issues.
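One cheap way to keep those differences visible is to diff the two environments' configurations on a schedule. A minimal sketch; the setting names and values are illustrative:

```python
def config_drift(prd, qa):
    """List settings where QA differs from production.

    Takes two dicts of settings and returns {key: (prd_value, qa_value)}
    for every mismatch. The fewer entries this returns, the closer a bug
    reproduced in QA will match what actually happens in PRD.
    """
    keys = set(prd) | set(qa)
    return {k: (prd.get(k), qa.get(k)) for k in keys if prd.get(k) != qa.get(k)}
```

Running a check like this after each PRD-to-QA refresh turns "QA is probably close enough" into a concrete, reviewable list of known differences.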
Borrowing an analogy from another field, and keeping proportions: not having a QA environment is like a doctor trying a new surgical method without testing it on mice or monkeys first. It can work, but there is a huge chance it goes wrong. And when it works, it's nothing but luck.
The more automation you have, the fewer human errors will happen. People will fail, and that's normal! Let people think about what really matters, not about routine tasks. Otherwise, someday:
- They will commit the latest code to the wrong branch. Then production breaks;
- They will forget to run a script before deploying a package. Then production breaks;
- They will forget to run the minimum tests before releasing a new process or service. Then production breaks;
A good way to prevent these errors is to automate as many things as you can.
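The three slips listed above can all be caught by an automated pre-deploy gate. A sketch of the idea; in practice these checks would run in a CI pipeline, and the branch name and check names here are hypothetical:

```python
def ready_to_deploy(branch, tests_passed, migrations_run):
    """Automated pre-deploy gate covering the three human slips above:
    wrong branch, skipped script, skipped tests.

    Returns a list of blocking problems; an empty list means deploy.
    """
    failures = []
    if branch != "main":
        failures.append("deploying from wrong branch: " + branch)
    if not migrations_run:
        failures.append("pending migration script was not run")
    if not tests_passed:
        failures.append("minimum test suite did not pass")
    return failures
```

The design choice worth copying is that the gate reports every problem at once instead of failing on the first one, so a tired human gets the whole picture before retrying.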
A good starting point is automating deployment. Many companies I've seen still struggle with deploys: they plan a deploy 4 weeks in advance, the deploy is done, it crashes, and it takes one or two weeks to fix the code and get production running again. Then they plan the next one 2 months in advance, and it crashes again. If deploys are a problem, why not make them trivial? Try deploying every day. A lot will be learned.
Automating tests is also a great initiative. Automated tests are one of the best weapons for software stability: the more test coverage your app has, the more stable it will be. These two suggestions take time to implement, but their benefits will be felt in the software's credibility throughout the whole company.
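Automated tests don't require heavy tooling to get started; a plain assertion suite already catches regressions on every run. A hypothetical example, with a made-up business rule under test:

```python
def apply_discount(price, percent):
    """Business rule under test (illustrative): percentage discount,
    rounded to cents."""
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    # Each assertion pins down behavior a manual tester would otherwise
    # have to re-check after every change.
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(59.90, 0) == 59.90

test_apply_discount()
```

Once a suite like this exists, wiring it into the pre-deploy gate from the previous section is what turns "they will forget to run the tests" into a non-event.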
Borrowing another analogy: not having automation is like loading a warehouse box by box, without a forklift. A lot of time will be wasted, of course. Some boxes will be dropped, some will be shaken too much. It works, but it adds unnecessary risks to the project.
The person responsible for testing your project is the main actor in your routine: theirs is the opinion that will be asked about ongoing practices. This person must be committed to, even obsessed with, your project, helping out and pointing a finger at anything that goes wrong. The best projects I've ever been part of had very picky people testing and giving their OK to delivered features.
If your project lacks this concern, many problems may come up:
- Lack of perception of value: what you have been practicing and reporting is not what the customer wants to hear or see;
- Lack of alignment between what is developed and what really adds value to the business;
- The worst one: lack of knowledge about the system. This person should be the one who knows the whole system; if they don't, many problems may come up from conflicting understandings of why something was built a certain way;
This one is like a mother who doesn't care about her baby's education. The child will grow, of course, but may turn into a time bomb.
Poor judgment in technology choices
Technology is serious business and can't be treated as a fashion trend. Fashion trends come and go and are renewed from time to time; a programming language, framework, feature, or cloud provider, or anything else related to development, must not be chosen because it's fancy or because it's what everybody is talking about right now. All of these decisions must be questioned by the development team and, if possible, with help from someone outside the project who can think together with the team and reach the best choice.
When wrong technology choices are made, problems like the following may appear:
- Difficulty finding people who know that technology to work with you on your project;
- Future problems due to incompatibility between that technology and the project requirements;
Drawing the parallel to other fields: using a new, unproven technology is like buying a hamburger with vegan bacon in it. It might be good, but it's not proven yet. It will be your bet.
How is your communication routine with your client structured? Do you have formal moments to share information, or do you share it only when something triggers it? Here there is no single right answer, as suggested in the article "The magical answer for software development project management".
But you must have a process to share information and to tell whoever is paying you what you are doing. It may sound strange, but the lack of this sharing is a serious problem in many projects. You must ensure that the activities you are conducting make sense to the client, and that they understand that what you are doing is good for the business. Will it be a weekly meeting? A daily? A status report?
Not having a process to give feedback on what you are doing is like investing money with a firm that never informs you about your returns.
Why attack all of this?
All the statements above refer to practices that must be present in your project; otherwise they turn into problems in the relationship or in the application itself. Whenever any of them happens, it means losing money, or at least weakening the system owner's brand. The main apps of a bank will naturally avoid large issues, but users don't like it when the registration for that cashback program crashes. It signals that the bank is not serious about its users or its marketing promises.