Blogplanet

Usually I don’t write many promotions for others’ content on this blog, as I try to keep it personal and focused on my own views. Recently I was contacted about the international 2016 State of Testing report and asked whether I would like to do a write-up about it. I asked whether it would be ok to post a personal view, so here it is.

Demographics – and what do they tell me?

The top regions in the report are Europe (& Russia), the USA, and India. I think these are also the biggest regions when it comes to software testing. The demographics tell me that the data, according to my impressions, is not very biased but well spread.

About a third of the respondents work across four different locations. Another third work in a single location. My personal view on this is that there is a good mix of testers working in one location and many more spread across different locations. I think this might stem from outsourcing companies as well as companies working across different sites for various reasons – even though this usually makes the formation of real teams hard, at least in my experience.

Most of the respondents have five years of working experience or more. I think testers who are new to the field usually don’t turn their attention to such surveys right away. I find this tragic, as in the long run we should be working on integrating people new to the field more easily.

Many test managers also appear in the survey data. This seems quite unusual to me, as there certainly are way more testers than test managers – I hope. It usually raises the question for me of how come there are so few testers passionate about their craft. In some ways this is tragic, but it reflects the state of the industry.

Interestingly, regarding time management, most of the testers’ time seems to be spent on documentation (51%) and dealing with environments (49%). That’s sort of weird, but it also matches my experience with more and more open source tools, and more and more programmers not really caring how their stuff can be tested or even brought to production. On the other hand, I notice many problems with test-data-centric automation approaches, where handling test data appears to be the biggest worry in many organizations. I usually attribute that to bad automation, as an automated test should usually be easy to deal with and create its own test data set that it operates on – a problem well addressed in the xUnit Test Patterns book, in my opinion – but few people appear to know about that book.
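To illustrate that idea, here is a minimal sketch of a test that creates its own test data (a “fresh fixture” in xUnit Test Patterns terms) instead of relying on a shared, pre-loaded test database. The class and method names are made up for illustration; only JUnit 4 is assumed.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

import java.util.HashMap;
import java.util.Map;

// Sketch: the test creates the exact data set it operates on instead of
// depending on a shared, pre-loaded test database.
public class DiscountRuleTest {

    // Hypothetical production logic, inlined to keep the sketch self-contained.
    static int discountFor(Map<String, String> customerTiers, String customerId) {
        return "PREMIUM".equals(customerTiers.get(customerId)) ? 10 : 0;
    }

    @Test
    public void premiumCustomersGetTenPercentDiscount() {
        // Fresh fixture: built here, used here, thrown away afterwards.
        Map<String, String> customerTiers = new HashMap<>();
        customerTiers.put("premium-1", "PREMIUM");

        assertEquals(10, discountFor(customerTiers, "premium-1"));
    }
}
```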

Skills – and what should you look out for?

Which sort of transitions my picture to the skills section. Testers appear to use a couple of approaches, foremost Exploratory Testing at 87%, while 60% mention they use scripted testing. This also matches my experience, since testing is rarely purely exploratory or purely scripted. The majority of testers claiming they use Exploratory Testing is either a signal of the rise of context-driven testing in general or a bias in the data. I think it’s more of the former.

I liked that test documentation is getting leaner. With the aforementioned 51% of testers’ time spent on documentation, this is certainly a good thing. At the conferences I attend, I see more and more sessions on how to use mind maps for that. About a third of the respondents said they already use mind maps. I think that’s a good signal.

Even though the authors claim that formal training is on the rise when it comes to testers’ skills and education, many testers are still trained through on-the-job training and mentoring, as well as learning from books and online resources. I think this is a good trend, since I doubt that formal training will be able to keep up with transferring skills in the long run. Formal courses can inspire testers to dive deeper into certain topics, but on-the-job training and mentoring, as well as actively reflecting on material that you read, are way more powerful.

Unsurprisingly, communication skills are the number one necessary skill for testers (78%). The next skills a tester needs, according to the survey, are functional testing and automation, web technologies, and general testing methodologies. That roughly matches my own past as a tester and the skills I put effort into. Unsurprisingly, 86% of the respondents claimed that they have test automation in place.

More Agile – less concerned

It seems that waterfall approaches are on the decline, even in the testing world. In 2015, 42% mentioned they used waterfall; in 2016 it was only 39%. 82% responded that they use Agile – maybe every once in a while.

Even though the testing community, for historical reasons, usually worries about job safety, this rise of Agile methodologies didn’t lead to more testers being concerned. Compared to 2015, where 42% were not concerned about their job, in 2016 53% of the folks were unconcerned. That might be related to context-driven approaches becoming more widespread.

This is just a summary with a few picks of my own. I encourage you to dive into the State of Testing survey report yourself to get more details.


“Think. Create. Learn.”: that was the motto of the Otto InnoDays 2016. In retrospect, how does the whole event measure up against this motto?

Motto: Think. Create. Learn.

“Think. Create. Learn.”: that was the motto of the InnoDays. Think and Create I saw and felt everywhere. Learning, in my opinion, was somewhat underexposed. Certainly a lot was learned on the technical level, also regarding feasibility (e.g. 3D printing of vouchers). And the teams learned something quantitative about the market: many teams had numbers, data, and facts about the market and specific user groups in their final presentations. What I missed was qualitative learning about the market – which problems and needs do real, existing people (and not some statistical average) actually have?

Conclusion

For my overall conclusion on the Otto InnoDays, I repeat what I already wrote at the very beginning:

Compared with similar events at other companies, the Otto InnoDays 2016 were definitely right at the front of the pack. And when I consider that Otto is a corporation with more than 4,000 employees and not an e-business company with only 200 employees, I take my hat off. Many supposedly more agile companies could learn a great deal from the Otto InnoDays 2016.

In some places I clearly had things to nitpick (involvement of end customers, cross-pollination of ideas), but compared to other companies that is complaining on a high level. I hope my nitpicking helps make the Otto InnoDays 2017 even cooler than the Otto InnoDays 2016. And then I will gladly let myself be invited again.

More on the Otto InnoDays 2016

The other blog posts in this series can be found here.

After a long time of agility and philosophy, here's something technical for a change.

Previously...

Yesterday, I was talking about OO and functional programming with Alex Chythlook, who told me that some problems I faced in Anathema could have been solved much more easily by applying functional programming idioms.
When I asked for details, he pointed me to a talk Jessica Kerr (@jessitron) held at GOTO 2014.
Go here in case you're watching the talk and would like to actually see the slides.

tl;dr:

All of her points are well made, and she's an entertaining presenter. I am happy that she pointed out some things that I didn't have names for previously.
However, I am surprised that many of these ideas count as functional, as they just appear to be thorough applications of OO principles to me. If they have been functional all along, then it is certainly good to call them that.

What stood out

"Errors are data too" Jessica said and told us to "not interrupt the execution flow". 
I've long spoken out against checked exceptions, and this nails it: If I treat an error as data, I am forced to deal with it. Right now, just where I would process the good result. No handing the exception outward, no wrap-to-catch-later.
That way, Exceptions are limited to things that I, as a programmer, have not foreseen, and that's the way they are meant to be used.
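To make that concrete, here is a minimal sketch of the “errors are data” idea in Java; the Result type and all names are invented for illustration and not taken from her talk or any particular library.

```java
import java.util.function.Function;

// Minimal sketch: the lookup returns a result value instead of throwing, so
// the caller handles the failure right where it handles the success.
final class Result<T> {
    private final T value;        // set on success
    private final String error;   // set on failure

    private Result(T value, String error) {
        this.value = value;
        this.error = error;
    }

    static <T> Result<T> success(T value) { return new Result<>(value, null); }
    static <T> Result<T> failure(String error) { return new Result<>(null, error); }

    // The only way to get at the value: both cases must be handled.
    <R> R fold(Function<T, R> onSuccess, Function<String, R> onFailure) {
        return error == null ? onSuccess.apply(value) : onFailure.apply(error);
    }
}

class ResultDemo {
    static Result<Integer> parseAge(String input) {
        try {
            return Result.success(Integer.parseInt(input));
        } catch (NumberFormatException e) {
            return Result.failure("not a number: " + input);
        }
    }

    public static void main(String[] args) {
        String message = parseAge("42").fold(
                age -> "age is " + age,
                error -> "could not read age (" + error + ")");
        System.out.println(message);
    }
}
```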

Idempotence, something she touches on at the very end, is quite important, too. Good to have a word for it now. It describes the idea that an executable block of code (so, a function or a Single Abstract Method object) should change the world only once, even if it's called multiple times.
From her talk, I picked up the notion that this should be true for every function, and I don't fully agree with that because complexity increases when I build things out of things that build things, even if those are just functions.
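Here is a minimal sketch of what an idempotent operation can look like in plain Java; the newsletter example and its names are invented for illustration.

```java
import java.util.HashSet;
import java.util.Set;

// Minimal sketch of an idempotent operation: calling it once or five times
// leaves the world in the same state.
class NewsletterSubscriptions {
    private final Set<String> subscribers = new HashSet<>();

    // Adding the same address twice has no further effect: the set absorbs
    // repeated calls, so retries are harmless.
    void subscribe(String emailAddress) {
        subscribers.add(emailAddress);
    }

    int count() {
        return subscribers.size();
    }

    public static void main(String[] args) {
        NewsletterSubscriptions subscriptions = new NewsletterSubscriptions();
        subscriptions.subscribe("reader@example.org");
        subscriptions.subscribe("reader@example.org"); // retry, e.g. after a timeout
        System.out.println(subscriptions.count());     // prints 1, not 2
    }
}
```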

The third main takeaway was her call for Lazy Evaluation. Doing that forces you to think in Objects or First Order Functions, and prevents procedural style – and that should always be a good thing.
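For illustration, here is a minimal sketch of lazy evaluation in Java using a plain java.util.function.Supplier; the report example is invented and not taken from the talk.

```java
import java.util.function.Supplier;

// Minimal sketch of lazy evaluation: the expensive computation is wrapped in
// a Supplier and only runs if and when somebody actually asks for the value.
class LazyReport {
    private final Supplier<String> body;

    LazyReport(Supplier<String> body) {
        this.body = body;
    }

    String render() {
        return body.get(); // evaluated here, not at construction time
    }

    public static void main(String[] args) {
        LazyReport report = new LazyReport(() -> {
            System.out.println("expensive computation runs now");
            return "42 pages of numbers";
        });
        System.out.println("report created, nothing computed yet");
        System.out.println(report.render());
    }
}
```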

What appears to be OO

I first stumbled when Jessica introduced a service object to limit a function's access to the database. Isn't that an application of the interface segregation principle (ISP) and the Single Responsibility Principle (SRP)? 
Smalltalk people always told me that even in Java, the client should define the exposed interface, and this is just that: If I need a service that just inserts, the method should have access to only that.
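As an illustration of that client-defined interface, here is a minimal Java sketch; the names (InsertsCustomers, Registration) are hypothetical and the “database” is just a println.

```java
// Minimal sketch of a client-defined interface: the class that only needs to
// insert gets exactly that capability and nothing else, even though the real
// database gateway can do much more.
interface InsertsCustomers {
    void insert(String customerName);
}

class CustomerDatabase implements InsertsCustomers {
    @Override
    public void insert(String customerName) {
        System.out.println("INSERT INTO customers ... " + customerName);
    }
    // delete, update, query, ... would live here as well, but the client
    // below never sees them.
}

class Registration {
    private final InsertsCustomers customers;

    Registration(InsertsCustomers customers) {
        this.customers = customers;
    }

    void register(String name) {
        customers.insert(name); // no way to accidentally delete or query
    }
}
```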

Next, she speaks about specific typing - again, isn't that just OO? If something represents a name, I should call it "Name", not "String". The DDD people have said it since 2004 and hardly anyone listened, so Jeff Bay had to point it out again in his paper on Object Calisthenics some seven years ago, calling on us to "wrap all primitives and Strings" (and numeric objects, of course).
Applying this principle in classes and in production code, I can well say that it is a game changer – your code becomes much clearer and more expressive this way, even more so when you apply his next rule – “use first-class collections” – as well (see the sketch below).
Back to Jessica: good on her for picking up on that, I fully agree. It surprised me, though, that she watered down the principle later in her talk by pointing us to Options and Tuples. Those two are never domain-specific, so effectively, she just put two new primitive types on our plate.
More recent languages have more elegant ways of expressing these concepts, and they might feel more at home there.
I was surprised once more when she presented the lack of concern about when or how my code is executed as a principle of functional programming. With the OO concept of encapsulation comes the idea of Single Abstract Method objects, which are handed around as executable blocks. She even pointed us to the GoF patterns Command and Strategy, so clearly, this has been around for a while and was considered OO back then. 
Did the GoF silently sneak in functional ideas? How dare they!
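To make Jeff Bay’s two rules tangible, here is a minimal Java sketch with invented names (Name, GuestList); it illustrates the rules and is not code from Jessica’s talk or Bay’s paper.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of "wrap all primitives and Strings": a Name is its own
// type and can enforce its own rules.
final class Name {
    private final String value;

    Name(String value) {
        if (value == null || value.isEmpty()) {
            throw new IllegalArgumentException("a name must not be empty");
        }
        this.value = value;
    }

    @Override
    public String toString() {
        return value;
    }
}

// Minimal sketch of a first-class collection: the list of names lives inside
// a small class with domain-specific behaviour instead of being passed
// around as a raw List<String>.
final class GuestList {
    private final List<Name> guests = new ArrayList<>();

    void invite(Name guest) {
        guests.add(guest);
    }

    int numberOfGuests() {
        return guests.size();
    }
}
```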

What I didn't quite get

Apart from these three good points, there was one I didn't get: Structural Sharing? What was it about, memory efficiency? Didn't she just tell us that we should rely on our compilers instead of doing their work?

Summary

So, all in all, it doesn't matter. Whether it's OO or functional, this is a good talk to watch. 
If you want to brush up on intermediate concepts of coding or see some examples of how things might get easier if you break your common patterns: Go watch!

When Otto, in the person of Sabrina Hauptman, asked me whether I wanted to take part in the InnoDays 2016, I was unsure how much it would really do for me. After all, I had already attended similar events: hackathons, FedEx Days, and whatever they are all called. And I was never really convinced.

At first glance it all looks quite simple and plausible. The company wants to become more innovative (who doesn’t?), and many developers have a strong desire to finally try out all the new technologies that are currently hot “out there”.

So you give the developers’ creativity free rein for a day or two and see what comes out of it. The events are consistently cool, too. You can feel the energy, and the developers have a lot of fun. For many companies that alone is already a big effort, and I very much respect the attempt.

Problems with many hackathons

Usually, however, it is not much more than a first attempt:

  • There is no defined process for what happens with the results. There is only the vague hope that someone “up there” will recognize the genius of the results and then put the projects on the roadmap. Which doesn’t happen.
  • The results are usually extremely technical (who hasn’t always wanted to set up a Hadoop cluster or run Docker in Docker?) and the added value for the company remains unclear. That is another reason why it doesn’t really surprise me that nobody turns the results into real projects.
  • Radical first ideas are almost always rubbish. They only become good or even great ideas by passing through different heads over a longer period of time and being combined with other ideas (see [Sawyer2008]). The hackathons I have seen achieve this barely or not at all.

Thus many hackathons are not much more than a flash in the pan: they flare up briefly but ultimately have no effect. It can even lead to frustration when the participants recognize this lack of impact.

On this topic, see also my blog post on the concept of slack time, in whose comments Markus Andrezak called the concept “opium for the masses”. That applies to quite a few hackathons as well: do they do more than pacify the developers?

Otto InnoDays

Certainly it is already a remarkable achievement when a corporation like Otto runs a hackathon as described above and thereby shows that such a thing is not reserved for the new, hip internet companies.

Still, the question remained for me: what could I learn from it?

The first Otto InnoDays took place in 2015 – back then only with the internal development staff. In 2016 the InnoDays were opened up to external staff and to the business departments. That already sounds quite good, because it offered the chance to get out of the purely technical focus. In addition, the Otto InnoDays 2016 were set up on a very large scale – with a process that ultimately ran for two weeks. And the organization was not done by someone “on the side” either. Instead, Otto provided substantial capacity for preparation and facilitation. They really seemed to mean it.

That did not dispel my doubts entirely. But curiosity began to prevail, and so I accepted.

Goals and process of the InnoDays 2016

Details… (to follow)

Otto InnoDays 2016: the whole story

This blog post is part of a blog post series.

Overview of the whole blog post series…

References

  • [Sawyer2008] Keith Sawyer: “Group Genius: The Creative Power of Collaboration”

I took part in the Otto InnoDays 2016. On Twitter, the event could be followed under the hashtag #innodays2016.

This blog post is the start of a small blog post series on the topic. I agreed with Otto that I would not engage in flattery here and may openly describe everything. In particular, I may also name what, in my view, can be done better (only personal insults may not be published here – which I don’t want to do anyway :-). And I noticed quite a few things that can be done better. That should not cloud the view of what has already been achieved, though. Compared with similar events at other companies, the Otto InnoDays 2016 were definitely right at the front of the pack. And when I consider that Otto is a corporation with more than 4,000 employees and not an e-business company with only 200 employees, I take my hat off. Many supposedly more agile companies could learn a great deal from the Otto InnoDays 2016.

I plan to cover the following topics in this blog post series over the coming weeks:

Part 1: Why I was skeptical at first and then took part anyway.

When Otto, in the person of Sabrina Hauptman, asked me whether I wanted to take part, I was unsure how much it would really do for me. After all, I had already attended similar events: hackathons, FedEx Days, and whatever they are all called. What was supposed to be so particularly interesting here – apart from the fact that the format is now also practiced by large corporations?

Details…

Part 2: The goals and the process

The InnoDays were meant to encourage thinking outside the box and also offer room for disruptive ideas. The InnoDays stretched over two weeks in total. They began with a warm-up phase, followed by idea generation. The ideas were then filtered (voting with your feet), and the selected ideas were implemented as prototypes. The results were presented at the end and evaluated by a jury. In addition, there is a defined process for how the “winners” are fed into the roadmap planning process.

Details…

Part 3: The culture change

The Otto group wants to modernize itself and become fit for the digital age. To that end, remarkable successes have been achieved in the eCommerce area in recent years. Scaled Scrum with a system architecture based on verticals provides the technical foundation on which business agility (to use a buzzword for once) becomes possible.

The InnoDays are a further step towards a corporate culture based on agile values. At the InnoDays I saw where these steps are already being taken courageously, but also where old structures keep breaking through.

Details…

Part 4: The projects

A total of 18 projects were carried out. I looked at their content and classified them according to various criteria. How many truly disruptive ideas were really among the projects? How many projects were based on far-fetched ideas that could not be implemented in the end? Which projects address new target groups, and which “only” optimize the existing service for existing customers? And why did things turn out the way they did?

Details…

Part 5: The big filters

At the InnoDays, ideas and projects were filtered several times. First, the ideas were filtered by voting with your feet. At the final presentation, the winners were filtered out. These go into a roadmap planning process in which filtering happens once again.

This approach corresponds to the state of the art for such events. From my point of view, however, it is not yet optimal. Radical ideas in their first version are almost always rubbish. They have to wander through different heads, be changed there, and be combined with other ideas. Only then can truly great things emerge. The big-filter approach prevents this process and therefore tends to turn mediocre ideas into reality. That is not too bad, since at least the bad ideas get sieved out. But it can be done much better: Diverge & Merge is more powerful than Diverge & Filter.

Details…

Part 6: A few theses

Otto and other companies can learn a lot from the InnoDays – also about their internal processes and structures. I have written down a few theses on this. Perhaps they will help to make future InnoDays even better.

Details…

Part 7: Summary

Think, Create, Learn: that was the motto of the Otto InnoDays 2016. In retrospect, how does the whole event measure up against this motto?

Details…


Recently I was reminded of a blog entry from Kent Beck way back in 2008. He called the method he discovered during pairing the Saff Squeeze, after his pair partner David Saff. The general idea is this: write a failing test at whatever level you can, then inline all code into the test, and remove everything that you don’t need to set up the test. Repeat this cycle until you have a minimal error-reproducing test procedure. I realized that this approach may be used in a more general way to enable faster feedback within a Sprint’s worth of time. I sensed a pattern there. That’s why I decided to get my thoughts down while they were still fresh – in a pattern format.
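For readers who have not seen the Saff Squeeze in action, here is a minimal sketch of a single squeeze step in JUnit; the Invoice code and the numbers are invented for illustration, and both tests fail on purpose – reproducing the same failure at a smaller scope is the point of the exercise.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Minimal sketch of one Saff Squeeze step; names and numbers are invented.
public class ShippingFeeSqueezeTest {

    // Hypothetical production code, kept here so the sketch is self-contained.
    static class Invoice {
        private int itemTotal;

        void addItem(int price) {
            itemTotal += price;
        }

        int totalWithShipping() {
            return itemTotal + shippingFeeFor(itemTotal);
        }

        // Suspected defect: integer division rounds the fee down.
        static int shippingFeeFor(int itemTotal) {
            return itemTotal / 20;
        }
    }

    // Step 1: a failing test at the highest level available.
    @Test
    public void totalIncludesShipping() {
        Invoice invoice = new Invoice();
        invoice.addItem(90);
        assertEquals(95, invoice.totalWithShipping()); // fails: 90 / 20 rounds down to 4
    }

    // Step 2: the body of totalWithShipping() has been inlined and the invoice
    // setup pruned away; the same failure is now reproduced at the smallest
    // level, pointing straight at the rounding.
    @Test
    public void shippingFeeIsRoundedUp() {
        assertEquals(5, Invoice.shippingFeeFor(90));
    }
}
```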

Testing inside one Sprint’s time

As a development team makes progress during the Sprint, the developed code needs to be tested to provide the overall team with the confidence to go forward. Testing helps to identify hidden risks in the product increment. If the team does not address these risks, the product might not be ready to ship for production use, or might make customers shy away from the product since there are too many problems with it that make it hard to use.

With every new Sprint, the development team will implement more and more features. With every feature, the test demand – the amount of tests that should be executed to avoid new problems with the product – rises quickly.

As more and more features pile up in the product increment, executing all the tests takes longer and longer up to a point where not all tests can be executed within the time available.

One usual way to deal with the ever-increasing test demand is to create a separate test team that executes all the tests in their own Sprint. This test team works separately from new feature development, on the previous Sprint’s product increment, to make it potentially shippable. This might help to overcome the testing demand in the short run. In the long run, however, that same test demand will pile up further, to a point where the separate test team will no longer be able to execute all the tests within their own separate Sprint. Usually, at that point, the test team will ask for longer Sprint lengths, thereby increasing the gap between the time new features are developed and the time their risks are addressed.

The separate test team will also create a hand-off between the team that implements the features and the team that addresses the risks. It will lengthen the feedback loop between introducing a bug and finding it, causing context-switching overhead for the people fixing the bugs.

In regulated environments, there are many standards the product should adhere to. These additional tests often take long times to execute. Executing them on every Sprint’s product increment, therefore, is not a viable option. Still, to make the product increment potentially shippable, the development team needs to fulfill these standards.

Therefore:
Execute tests on the smallest level possible.

Especially when following object-oriented architecture and design, the product breaks down into smaller pieces that can be tested on their own. Smaller components usually lead to faster execution times for tests since fewer sub-modules are involved. In a large software system involving an application server with a graphical user interface and a database, the business logic of the application may be tested without involving the database at all. In hardware development, the side-impact system of a car may be tested without driving the car against an obstacle by using physical simulations.

One way to develop tests and move them to lower levels in the design and architecture starts with a test on the highest level possible. After verifying that this test fails for the right reasons, move it further down the design and architecture. In software, this may be achieved by inlining all production code into the test, and after that throwing out the unnecessary pieces. Programmers can then repeat this process until they reach the smallest level possible. For hardware products, similarly focused tests may be achieved by breaking the hardware apart into sub-modules with defined interfaces, and executing tests on the module level rather than the whole-product level.
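As a small software-side illustration of testing at the smallest level possible, here is a sketch of business logic tested without any database or application server; the pricing rule and its names are hypothetical.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Minimal sketch: the pricing rule is a plain object with no knowledge of
// persistence, so its test runs in milliseconds.
public class PricingRuleTest {

    // Hypothetical business logic, free of any database or server dependency.
    static class PricingRule {
        int priceAfterRebate(int listPrice, int rebatePercent) {
            return listPrice - (listPrice * rebatePercent / 100);
        }
    }

    @Test
    public void tenPercentRebateOnHundred() {
        assertEquals(90, new PricingRule().priceAfterRebate(100, 10));
    }
}
```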

By applying this approach, regulatory requirements can be broken down to individual pieces of the whole product, and, therefore, can be carried out in a faster way. Using the requirements from the standards, defining them as tests, and being able to execute them at least on a Sprint cadence, helps the development team receive quick feedback about their current process.

In addition, these tests will provide the team with confidence in order to change individual sub-modules while making sure the functionality does not change.

This solution still introduces an additional risk. By executing each test on the smallest level possible, and making sure that each individual module works correctly, the development team may sub-optimize the testing approach. Even though each individual module works correctly according to its interface definition, the different pieces may not interact correctly with each other, or may work on diverging interface definitions. This risk should be addressed by carrying out additional tests focused on the interfaces between the individual modules, to avoid sub-optimization and non-working products. Fewer tests will be necessary for the integration of the different modules, though, so the resulting tests will still fit into a Sprint’s length of time.


In this post, I will explore what makes a good Development Team in Scrum from an organizational perspective.

Great teams

When it comes to structuring the Development Team, the Scrum Guide tells us that the team is “cross-functional, with all of the skills as a team necessary to create a product increment” and that the team should be structured to “organize and manage their own work”. Finally, we learn that the team should have between 3 and 9 developers, to be able to contain all the skills required while not increasing management complexity beyond the level self-organization can casually deal with.


Over the past few years, I have helped some clients improve their teams’ setup and structure, and found that there are more criteria that the Scrum Guide only hints at but never mentions explicitly.

Have few dependencies

While Scrum expects the team to be able to provide the product increment on their own, corporate reality often sees products with a scope much larger than what a single team can produce in reasonable time.
You may have read about scaling approaches like LeSS and their call for feature teams. Rare is the product, though, where splitting into features is possible the moment you start with Scrum.


To alleviate the situation, I look for a setup where inter-team dependencies are as few as possible. In an ideal case, this means that the team has one team to receive input from and one more team to deliver output to.
For work to flow, the team has to be independent from other teams. Define your teams so that their scope covers a large, uninterrupted piece of the value chain, and integrate technical concerns as much as possible.
That way, you will reduce the number of handovers both in planning and in the actual work done.

Stay together

In a corporate environment, projects start and stop all the time. Project managers vie for resources, eyes on the deadline, not on the social effects. For team members, this means that today’s colleague may be gone tomorrow – so each of them looks out for their own piece of work first and foremost.
When introducing Scrum in such an environment, I emphasize the twin values of stability and reliability.
It takes a stable environment for employees to benefit from investing in the team’s success rather than their own achievements. Team members learn to trust each other over time and start to co-operate. In a stable team, members can learn each other’s abilities and preferences – and that’s what it takes to get better.
Reliability, meanwhile, means that team members are on the team, period. Splitting people between projects eats up time beyond the numbers in your spreadsheet, as it introduces interruptions in the form of urgent requests from outside projects.
Now, any corporation worth the name will have a complex resource distribution in place already, and you will be hard pressed to change it all at once. What to do?
Allot people to the product as much as possible, and encourage them to block fixed slots of time for working on the product. If necessary, suggest they turn off phones, chat and mail clients; empower them to postpone dealing with incoming requests till team time is over.
That way, their team can learn to rely on them, and teamwork improves.

Learn from each other

Remember what the Scrum Guide said? “Teams organize and manage their own work.”
It takes a certain level of experience to do that, and I have frequently heard managers or Scrum Masters complain that their team is not yet ready for this level of freedom – and next coordinate the work themselves, trampling the sprouts of self-organization.
So, it takes experience for the team to self-organize, and it takes laissez-faire.
The best teams I worked with did not consist of veterans only, though, since less experienced team members bring two crucial factors to the team: By embracing their curiosity, they challenge long-standing practices; by working with several of their more senior colleagues, they foster communication.
So, when building a team, I look for a good mix of old hands and new to make learning happen and self-organization possible, while still having all the skills to actually build the product.

Make work flow

This concludes my thoughts about criteria to watch out for when building great teams:
reliability, stability, experience, cross-functionality and independence.


So, when you are next thinking about how to improve your team, just think “Teams R SEXI”, and you are well on your way:


Reliable
Stable
Experienced adequately
X-functional and
Independent


Mind, though, that these criteria are strongly skewed towards the organizational perspective and pay no heed to the more mushy, social criteria that are well worth considering as well.

What criteria do you apply to find your teams? Did you go at it from an entirely different angle? 
I am looking forward to your comments.
In this post, I explore the Scrum value of focus, the notion of focus time and its implications for your Scrum team’s setup.

Focus as a Scrum value

Focus. It is one of the core values of Scrum, reminding us to be precise about our development goals and to stay on target at all times.
Focus tells Product Owners to build one product for a well defined audience, tells Scrum Masters to help their teams improve in one area at a time, and tells the Development Team to go for the Sprint goal and only the Sprint goal.
It is them - the team - I want to focus on in this post, although the thoughts apply to the other roles as well.

Focus time

Recently, a client introduced me to the notion of focus time. By his definition, focus time is the part of a Sprint that the team actually spends working on product backlog items directly contributing to the Sprint goal.
Thus, focus time is comprised of all the work required to get one specific product backlog item from “ready” to “done”, including design, implementation, testing, documentation and posing the odd question to the Product Owner or relevant stakeholders.
It is not: Helping to prepare the next Sprint, demonstrating the product increment in the review, improving the process in the retrospective or organizing the team in the Daily Scrum.1 While all of these activities are important, they are by definition off-focus, as they do not directly improve the product2.

Perfect world

In an ideal setup, the team spends most of their time focused. The Scrum Guide allows for up to 10% of the development team’s time to be taken for product backlog refinement3, while planning, review and retrospective take up 10% of the team’s time yet again.
Another 5% are eaten up by Daily Scrums and general communication with people in- and outside of our product: Even the best of bosses and stakeholders need some attention from time to time.

This leaves our ideal development team with
100% Sprint time
- 10% refinement
- 10% planning, review, retrospective
- 5% Daily Scrum and communication
=  75% focus time

That’s 7½ workdays out of your two week sprint! Contrast that with the common complaint that Scrum is all talking and keeps people from getting things done.

Half measures

Rare, however, is the team where all team members contribute all day, every day. The larger an organization, the larger its tendency to split people’s time and attention over several projects or products.
25 years ago, in 'Quality Software Management: Systems Thinking', Gerald Weinberg introduced us to the hidden cost of context switching, claiming that an even split across two projects left a knowledge worker with only 40% of his or her capacity to apply to each of them. (While Weinberg's original text is not available for free, his idea has spread beyond the cover of his book.)

So, let’s split some people and look at their focus time:
50% allotted time
- 10% context switching
- 5% refinement
- 10% planning, review and retrospective
-  5% Daily Scrum and communication
= 20% focus time

In case you’re wondering, I left the numbers for the four main Scrum rituals at their original value since all teams I have worked with so far insisted that their split members take part in all of them, since this is where the team makes all major decisions.
So, out of a promised 50% of somebody’s time and attention, only 20% contribute directly to the goals we set for the product – that’s hardly more than half of the 37.5% we might expect when looking at the original calculation.
Now, suddenly, Scrum is all talk and no action.

Said and done

So, the idea of focus – concentrating on what we ought to do – led us to the notion of focus time, the time spent directly contributing to the Sprint goal. We have seen that ¾ of a team’s time could be spent focused, and that this number shrinks drastically when we split people’s time across products.

Now, I’d like to invite you to examine the time your team spends focused.
Do you get close to ideal numbers? Are you in the 20% range, even though everyone is on the project 100% of their time? Where does the time people don’t spend focused go and which of these activities are really necessary?
All in all: What could you do to improve the numbers?

Besides looking at things that distract the team, you might want to look at the product backlog items your team and Product Owner agree upon for the sprint. Is there a common theme, so that most or all of them contribute to a single goal?
If they have little in common, one could argue, there is not really one Sprint goal, and the team could never reach the lofty numbers I laid out above. However, there could be leverage in the way the Product Owner prioritizes the product backlog or the way you plan your sprints.

Let’s talk about your numbers and your findings in the comments!

Postscriptum

Writing this post brought to my mind a number of questions, among them:
“What keeps teams from focussing and how to get back there quickly?”, “How to show the value of Scrum’s rituals to the ‘rather do than talk’ faction?”, and “Is focus time a good measure for the quality of a Scrum process? What other metrics are there?”

Comment below if you have thoughts on any of these questions or are particularly interested in one of them.

1 Neither is it taking part in communities of practice, performance reviews, answering mails, general discussion or undirected learning, along with most other things team members like to do.

2 Actually, this is the main reason that some developers tend to resent them. Hackers want to do, not talk. As a Scrum Master, you could do worse than to think about this before your colleagues bring it up.

3 Note that this is not the established practice of a “refinement meeting”, but rather includes all activities dealing with the process of refining the product backlog.

Last year, I interviewed Jerry Weinberg on Agile Software Development for the magazine that we produce at it-agile, the agile review. Since I translated it to German for the print edition, I thought why not publish the English original here as well. Enjoy.

Markus:
Jerry, you have been around in software development for roughly the past 60 years. That’s a long time, and you certainly have seen one or another trend passing by in all these years. Recently you reflected on your personal impressions on Agile in a book that you called Agile Impressions. What are your thoughts about the recent up-rising of so called Agile methodologies?

Jerry:
My gut reaction is “Another software development fad.” Then, after about ten seconds, my brain gets in gear, and I think, “Well, these periodic fads seem to be the way we advance the practice of software development, so let’s see what Agile has to offer.” Then I study the contents of the Agile approach and realize that most of it is good stuff I’ve been preaching about for those 60 years. I should pitch in and help spread the word.

As I observe teams that call themselves “Agile,” I see the same problems that other fads have experienced: people miss the point that Agile is a system. They adopt the practices selectively, omitting the ones that aren’t obvious to them. For instance, the team has a bit of trouble keeping in contact with their customer surrogate, so they slip back to the practice of guessing what the customers want. Or, they “save time” by not reviewing all parts of the product they’re building. Little by little, they slip into what they probably call “Agile-like” or “modified-Agile.” Then they report that “Agile doesn’t make all that much difference.”

Markus:
I remember an interview that you gave to Michael Bolton a while ago where you stated that you learned from Bernie Dimsdale how John von Neumann programmed. The description appeared to me to be pretty close to what we now call test-driven development (TDD). In fact, Kent Beck always claimed that he simply re-discovered TDD. That made me wonder what happened in our industry between the 1960s and the 2000s that made us forget the ways of smart people. As a contemporary witness of those days, what are your insights?

Jerry:
It’s perfectly natural human behavior to forget lessons from the past. It happens in politics, medicine, conflicts—everywhere that human beings try to improve the future. Jefferson once said, “The price of liberty is eternal vigilance,” and that’s good advice for any sophisticated human activity.

If we don’t explicitly bolster and teach the costly lessons of the past, we’ll keep forgetting those lessons—and generally we don’t. Partly that’s because the software world has grown so fast that we never have enough experienced managers and teachers to bring those past lessons to the present. And partly it’s because we don’t adequately value what those lessons might do for us, so we skip them to make development “fast and efficient.” So, in the end, our development efforts are slower and more costly than they need to be.

Markus:
The industry currently talks a lot about how to bring lighter methods to larger companies. Since you worked on Project Mercury – the predecessor of Project Apollo at NASA – you probably also worked on larger teams and in larger companies. In your experience, what are the crucial factors for success in these endeavors, and what are the things to watch out for that may do more harm than good?

Jerry:
In the first place, don’t make the mistake of thinking that bigger is somehow automatically more efficient than smaller. You have to be much more careful with communications, and one small error can cause much more trouble than in a small project.

For one thing, when there are many people, there are many ways for new or revised requirements to leak into the project, so you need to be extra explicit about requirements. Otherwise, the project grows and grows, and troubles magnify.

It is very difficult to find managers who know how to manage a large project. Managers must know or learn how to control the big picture and avoid all sorts of micromanagement temptations.

Markus:
A current trend we see in the industry appears to evolve around new ways of working, and different forms to run an organization. One piece of it appears to be the learning organization. This deeply connects to Systems Thinking for me. Recognizing you published your first book on Systems Thinking in 1975, what have you seen being crucial for organizations to establish a learning culture?

Jerry:
First of all, management must avoid building or encouraging a blaming culture. Blame kills learning.

Second, allow plenty of time and other resources for individual learning. That’s not just classes, but includes time for reflecting on what happens, visiting other organizations, and reading.

Third, design projects so there’s time and money to correct mistakes, because if you’re going to try new things, you will make mistakes.

Fourth, there’s no such thing as “quick and dirty.” If you want to be quick, be clean. Be sure each project has sufficient slack time to process and socialize lessons learned.

Finally, provide some change artists to ensure that the organization actually applies what it learns.

Markus:
What would you like to tell to the next generation(s) of people in the field of software development?

Jerry:
Study the past. Read everything you can get your hands on, talk to experienced professionals, study existing systems that are doing a good job, and take in the valuable lessons from these sources.

Then set all those lessons aside and decide for yourself what is valuable to know and practice.

Markus:
Thank you, Jerry.


Most, or rather all, Agile folks love to work with a flipchart when presenting or running workshops. Some are so obsessed with the “flipchart-marker-visual-facilitation universe” that you could think they have a Neuland tattoo. :) Or they want to have a flipchart even at home. Like me! ;) In this post I will write about […]
Last update of the planet:
27.06.2016
18:05 UTC