A few months ago, I had the opportunity to join Craig Larman at a client for an informed-consent workshop on Large-Scale Scrum (LeSS). Ever since I took his class in 2015, I had been interested in how he starts off a LeSS adoption – or a potential LeSS adoption, I should say. He asked me to do a write-up.

We had four days at the client overall. The first day was half Legacy TDD and half Impact Mapping. For days two and three we were off-site with about 30 employees from different departments, including finance and controlling and organizational development, as well as the CEO. We spent the final, fourth day back at the client, answering questions and giving a three-hour all-hands introduction to LeSS.


Craig shared a Google document with me that he had also shared with the client. It covered a couple of things he made sure of up-front, including a description of the company, questions from the client to Craig and from Craig to the client, and the invitation mail to the workshop participants, along with the motivation for LeSS. That was interesting to me: I usually handle this verbally or on the phone, whereas a shared document appeared rather more straightforward.

Half-day Legacy TDD

On the first day, we spent the morning on some legacy code. About 10 developers joined us. Craig gave a short introduction to TDD and how Legacy TDD differs from it. Then we worked in a mob, rotating every few minutes. In those four hours, we managed to write two tests for one of their legacy PHP classes. The first test appeared a bit daunting, but Craig also explained lots of things around dependencies, how to deal with them, the differences between stubbing and mocking, etc., etc. It paid off for the second test, which we were able to write faster since we already had some scaffolding from the first test in place.

Half-day Impact Mapping

In the afternoon, we spent another four hours with the product management group. Most of them were called Product Owners, but the CEO and one of their Scrum Masters also joined us. Craig explained the general approach and had the group identify a goal. Then the group worked from the impacted persons toward the impacts they needed to achieve in order to get closer to the goal. We also mapped some of the deliverables that were already in their backlogs. The group took quite a while to agree on a goal, so at the end of the day we didn’t have time to actually estimate the impacts and the deliverables; instead, Craig explained the steps that were missing.

What struck me about the first day was that Craig spent it at the gemba in a value-adding way. He worked with the programmers on their production code in the morning, and with the product group in the afternoon. In both cases, he was able to add value to different groups while getting an overview of how work is done at the client.

Workshop Day 1

In general I had a pretty good impression of the workshop. Craig led us through the Thinking Tools behind LeSS. He started off with occupational psychology and how fear hinders any adoption. We worked at great length through the different kinds of waste and queueing theory, and created Causal Loop Diagrams for a couple of dynamics that are the main motivation behind the elements of LeSS. Craig avoided mentioning anything LeSS-specific on day one. Instead he made the case for one backlog per product quite clear, showing that the dynamic with more backlogs, i.e. one backlog per team, leads to sub-optimal identification with the customer. Compared to the course I took in February 2015, I found that the argumentation followed a clearer line, so I could see the improvements made to the material since then in action.

Workshop Day 2

On the second day, we dived deeper into LeSS with a short introduction to the framework as well as some notes on adoption. We spent the second half of the day with Craig’s deep dive into questions that had come up. Anyone who has attended Craig’s course probably knows what I’m talking about. Craig spent lots of time getting deep into each answer so that participants grasped the context of the answers and the thinking behind LeSS.

Day 3 Q&A and Informed Consent

On day 3 we spent the morning with other potential coaches and some volunteers back in the client’s office. We worked through some open questions, and then Craig urged the group to come up with tiny next steps to take before starting bigger changes in a month or two. He also ran an anonymous, non-committing poll on whether the group wanted to go with LeSS. There were two objectors, eight in favor (iirc), and the majority of the group would go along, i.e. not resisting, but also not actively pushing forward. Personally, I think they identified way more next steps than I would have called suitable for the one-to-two-month timeframe.

After that I had to leave for my train ride home, but Craig also did a whole company introduction to LeSS. I don’t know what happened there, though.

The overall goal of the four days was for the client to make an informed decision to either go for LeSS or something else. Craig made sure that the client made the decision about where to go, and that people in the workshop understood potential side-effects. Together with the Causal Loop Diagrams, they should now be able to evaluate their future struggles.


I just listened to the webinar by Jeff Sutherland and Ken Schwaber in which the current changes to the Scrum Guide were explained. Not much has changed, but a section on the Scrum values has been added; the English original can be found here.

As expected, they are the following five values, which Ken Schwaber already published in his first Scrum book. Interestingly, Ken and Jeff chose a specific order in which to explain the values, even though this order does not appear in the Scrum Guide.

  • Commitment: The commitment of the team and of management is needed so that things are actually tackled and actually finished, with full energy. The team does not merely try to reach the Sprint goal, it is committed to it!
  • Focus: You can only be successful if you focus on your task (and a commitment without focus is probably worth nothing anyway).
  • Openness: Especially for knowledge work, openness creates more options and offers a greater chance of finding a better solution together. Everyone should also know what the biggest problem currently is, so that everyone can contribute to solving it.
  • Respect: In a way, respect is a prerequisite for openness, because only those who feel respected will be able to be open about their weaknesses.
  • Courage: Openness, change, and speaking up about problems require courage. Adopting Scrum also requires courage, because you do not get Scrum’s many advantages without the downside of major cultural change in the organization.

It’s been a while since I last wrote code. Back in late April, however, I found myself in a Coding Dojo at the Düsseldorf Softwerkskammer meet-up working on the Mars Rover Kata. I have a story to share from that meeting. However, when I tried to reproduce the code we ended up with that night and decided to give JUnit5 (and Java 8) a go for that, I ended up with a struggle.

Back in the JUnit4 days I used the ParameterizedRunner quite often for data-driven tests. I never remembered the signature of the @Parameters function, though. The Mars Rover Kata also includes some behavior that I wanted to run through a data-driven test, but how do you do that in JUnit5? I couldn’t find good answers on the Internet, so I decided to put my solution up here – probably inviting lots of critique.
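For reference, here is a minimal sketch of that JUnit4 style, assuming the same Direction type that appears in the JUnit5 example below (the class and method names are mine, not the dojo’s code). The part that is so easy to forget is the public static method annotated with @Parameters that returns a Collection of Object arrays:

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class DirectionTurnLeftJUnit4Test {

    // The signature I keep forgetting: a public static method annotated with
    // @Parameters, returning a Collection of Object arrays.
    @Parameters(name = "{0} turns left to {1}")
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] {
                { Direction.NORTH, Direction.WEST },
                { Direction.WEST, Direction.SOUTH },
                { Direction.SOUTH, Direction.EAST },
                { Direction.EAST, Direction.NORTH }
        });
    }

    private final Direction start;
    private final Direction expected;

    public DirectionTurnLeftJUnit4Test(Direction start, Direction expected) {
        this.start = start;
        this.expected = expected;
    }

    @Test
    public void turnsLeftBy90Degrees() {
        assertEquals(expected, start.turnLeft());
    }
}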

Please note that I used JUnit 5.0.0-SNAPSHOT which is a later version than the alpha, but probably not the final one.

Besides Java 8 capabilities, JUnit5 offers some interesting new things. JUnit5 now comes with Extension capabilities that let you influence the test lifecycle, and with ways to resolve parameters for your test methods and test class constructors. And then there are TestFactories for DynamicTests. Whoa, quite a lot of new stuff.

First I tried parameter resolvers. But then I would have needed to keep track of the parameters, and the parameter resolver had to be called more than once. So, combining it with an extension might work? No, I couldn’t make that work. So, dynamic tests are the way to go.

So, here is an example of what I ended up with. We have a Direction class with a method called turnLeft(). The idea is that if the Rover is headed NORTH and turns left (by 90 degrees), it will be facing WEST.
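A minimal sketch along those lines follows; it assumes the Direction type with NORTH/WEST/SOUTH/EAST constants and a turnLeft() method, and the test-data class and helper names are illustrative rather than the exact code from the dojo:

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.DynamicTest.dynamicTest;

import java.util.Arrays;
import java.util.Collection;
import java.util.stream.Stream;

import org.junit.jupiter.api.DynamicTest;
import org.junit.jupiter.api.TestFactory;

class DirectionTest {

    // One entry per case: the direction we start from and the direction we expect after turning left.
    private static class TurnData {
        final Direction start;
        final Direction expected;
        TurnData(Direction start, Direction expected) {
            this.start = start;
            this.expected = expected;
        }
    }

    // The test data lives in a field, similar to the old @Parameters method in JUnit4.
    private final Collection<TurnData> testData = Arrays.asList(
            new TurnData(Direction.NORTH, Direction.WEST),
            new TurnData(Direction.WEST, Direction.SOUTH),
            new TurnData(Direction.SOUTH, Direction.EAST),
            new TurnData(Direction.EAST, Direction.NORTH));

    // The @TestFactory turns every entry into one DynamicTest.
    @TestFactory
    Stream<DynamicTest> turningLeftRotatesCounterClockwise() {
        return testData.stream().map(data ->
                dynamicTest(displayName(data), () -> assertTurnLeft(data)));
    }

    private String displayName(TurnData data) {
        return "turning left from " + data.start + " should face " + data.expected;
    }

    // The assertion is wrapped to keep the dynamicTest call readable.
    private void assertTurnLeft(TurnData data) {
        assertEquals(data.expected, data.start.turnLeft());
    }
}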

Some notes:

  • I kept a collection of test data in a field in line 17. This is somewhat similar to the old way of annotating a function with @Parameters in JUnit4, even though you can now get rid of the Object[] and use a private test data class per test class. That, at least, is the solution I preferred.
  • For the @TestFactory you have several possibilities. I decided to use the Stream return type here in line 28. As I haven’t programmed too much in Java 8, I am not sure whether my usage is appropriate here. The conversion of the testData from the Collection is quite straight-forward, I found.
  • For each operation I wrapped the assertion in line 36 to avoid making the call to dynamicTest more convoluted than necessary. I also decided to generate a descriptive string for each test with the method in line 32. I think you can come up with better ways to generate the test descriptions. Wrapping the assertion seemed unavoidable, though. I especially didn’t like that the combination of the lambda expression and the aggregate expression makes the line with dynamicTest (line 29) less readable than I would like. I think there is more improvement possible.
  • Note that you can have several @TestFactory methods on your test class. So when writing a test for turning right, you can provide another TestFactory and reuse the test data for that. I’ll leave that as an exercise for the inspired reader of my blog.

So, this is what I ended up with. I think there is still room for improvement, especially when you compare the result with stuff you might write in tools like Spock.

P.S.: I ran this through Marc Philipp – one of the JUnit5 originators – in an earlier version, and he told me that they will be working on a more elegant solution for data-driven tests, probably for one of the next releases of JUnit5.


Usually I don’t write many promotions of others’ content on this blog, as I try to keep it personal and focused on my own views. Recently I was contacted about the international 2016 State of Testing report and asked whether I would like to do a write-up about it. I asked whether it would be ok to post a personal view, so here it is.

Demographics – and what do they tell me?

The top areas from the report are Europe (& Russia), USA, and India. I think these are also the biggest areas when it comes to software testing. The demographics tell me that, going by my impressions, the data is not particularly biased but well spread.

About a third of the respondents work across four different locations, and another third work in a single location. My take is that there is a good mix of testers working in one location and testers spread across several locations. I think this might stem from out-sourcing companies as well as companies working across different sites for various reasons – even though this usually makes the formation of real teams hard, at least in my experience.

Most of the respondents have five or more years of working experience. I think testers new to the field usually don’t pay attention to this kind of survey right away. That is tragic, as in the long run we should be working on integrating people new to the field more easily.

Many test managers also appear in the survey data. This seems quite unusual to me, as there are certainly way more testers than test managers – I hope. It usually raises the question for me of how come there are so few testers passionate about their craft. In some way this is tragic, but it resembles the state of the industry.

Interestingly, regarding time management, most of the testers’ time seems to be spent on documentation (51%) and dealing with environments (49%). That’s sort of weird, but it also matches my experience with more and more open-source tools, and more and more programmers not really caring how their stuff can be tested or even brought to production. On the other hand, I notice many problems with test-data-centric automation approaches, where handling test data appears to be the biggest worry in many organizations. I usually attribute that to bad automation, as an automated test should usually be easy to deal with and create the test data set that it operates on – a problem well addressed in the xUnit Test Patterns book, in my opinion – but few people appear to know about that book.

Skills – and what should you look out for?

That sort of transitions my picture to the skills section. Testers appear to use a couple of approaches, foremost Exploratory Testing with 87%. 60% also mention that they use scripted testing. This matches my experience, since testing is rarely purely exploratory or purely scripted. I think the majority of testers claiming they use Exploratory Testing is either a signal of the rise of context-driven testing in general or a bias in the data. I think it’s more of the former.

I liked that test documentation is getting leaner. Given that testers spend 51% of their time on documentation, as mentioned above, this is certainly a good thing. At the conferences I attend, I see more and more sessions on how to use mind maps for that. About a third of the respondents said they already use mind maps. I think that’s a good signal.

Even though the authors claim that formal training is on the rise when it comes to testers’ skills and education, many testers are still trained on the job and through mentoring, as well as by learning from books and online resources. I think this is a good trend, since I doubt that formal training will be able to keep up with transferring skills in the long run. It can inspire testers to dive deeper into certain topics, but on-the-job training and mentoring, as well as active reflection on the material you read, are way more powerful.

Unsurprisingly, communication is the number one skill testers need (78%). The next skill set a tester needs, according to the survey, covers functional testing and automation, web technologies, and general testing methodologies. That sort of resembles my own past as a tester and the skills I put effort into. Unsurprisingly, 86% of the respondents claimed that they have test automation in place.

More Agile – less concerned

It seems that waterfall approaches are on the decline, even in the testing world. In 2015, 42% mentioned they used Waterfall; in 2016 it was only 39%. 82% responded that they use Agile – maybe every once in a while.

Even though the testing community, given its history, is usually concerned about job safety, this rise of Agile methodologies didn’t lead to more testers being concerned. Compared to 2015, when 42% were not concerned about their jobs, in 2016 53% of the folks were unconcerned. That might be related to context-driven approaches becoming more widespread.

This is just a summary with some of my own picks. I encourage you to dive into the State of Testing survey report on your own to get more details.


“Think. Create. Learn.”: that was the motto of the Otto InnoDays 2016. Looking back, how does the whole event measure up against this motto?

Motto: Think. Create. Learn.

“Think. Create. Learn.”: that was the motto of the InnoDays. I saw and felt Think and Create everywhere. Learning, in my opinion, got somewhat short shrift. Certainly a lot was learned on the technical level, also regarding feasibility (e.g. 3D printing of vouchers). And the teams also learned something quantitative about the market: many teams had figures, data, and facts about the market and specific user groups in their final presentations. What I missed was qualitative learning about the market – which problems and needs do real, existing people (and not some average) actually have?


For my overall verdict on the Otto InnoDays, I repeat what I wrote at the very beginning:

Compared with similar events at other companies, the Otto InnoDays 2016 were definitely among the front runners. And when I consider that Otto is a corporation with more than 4,000 employees and not an e-business company with only 200 employees, I take my hat off. Many supposedly more agile companies could learn a great deal from the Otto InnoDays 2016.

There were a few points I clearly criticized (involvement of end customers, cross-pollination of ideas), but compared to other companies that is complaining at a high level. I hope my nitpicking helps make the Otto InnoDays 2017 even cooler than the Otto InnoDays 2016. And then I will gladly be invited again.

More on the Otto InnoDays 2016

The other blog posts in this series can be found here.

After a long time of agility and philosophy, here's something technical for a change.


Yesterday, I was talking OO and functional programming with Alex Chythlook, who told me that some problems I faced in Anathema could have been solved much easier applying functional programming idioms.
When I asked for details, he pointed me to a talk Jessica Kerr (@jessitron) held at GOTO 2014.
Go here in case you're watching the talk and would like to actually see the slides.


All of her points are well made, and she's an entertaining presenter. I am happy that she pointed out some things that I didn't have names for previously.
However, I am surprised that many of these ideas are functional, as they just appear to be thorough applications of OO principles to me. If they have been functional all the time, then it is certainly good to call them thus.

What stood out

"Errors are data too" Jessica said and told us to "not interrupt the execution flow". 
I've long spoken out against checked exceptions, and this nails it: If I treat an error as data, I am forced to deal with it. Right now, just where I would process the good result. No handing the exception outward, no wrap-to-catch-later.
That way, Exceptions are limited to things that I, as a programmer, have not foreseen, and that's the way they are meant to be used.
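To illustrate the idea, here is a minimal Java sketch with names of my own choosing (not from the talk or from any particular library): the parse function reports failure as data instead of throwing, so the caller has to decide what to do with the error right where the value would be used.

import java.util.function.Function;

final class Result<T> {
    private final T value;      // present on success
    private final String error; // present on failure

    private Result(T value, String error) {
        this.value = value;
        this.error = error;
    }

    static <T> Result<T> ok(T value) { return new Result<>(value, null); }
    static <T> Result<T> error(String message) { return new Result<>(null, message); }

    // The only way to get at the value: state what happens in both cases.
    <R> R fold(Function<T, R> onSuccess, Function<String, R> onError) {
        return error == null ? onSuccess.apply(value) : onError.apply(error);
    }

    // Example producer: parsing that reports failure as data instead of throwing.
    static Result<Integer> parseAge(String input) {
        try {
            return ok(Integer.parseInt(input));
        } catch (NumberFormatException e) {
            return error("'" + input + "' is not a number");
        }
    }

    public static void main(String[] args) {
        String message = parseAge("forty-two")
                .fold(age -> "age is " + age, problem -> "could not parse: " + problem);
        System.out.println(message); // could not parse: 'forty-two' is not a number
    }
}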

Idempotence, something she touches on in the very end, is quite important, too. Good to have a word, now. It describes the idea that an executable block of code (so, a function or a Single Abstract Method object) should change the world only once, even if it's called multiple times.
From her talk, I picked up the notion that this should be true for every function, and I don't fully agree with that because complexity increases when I build things out of things that build things, even if those are just functions.
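A minimal, illustrative sketch of that idea: calling deactivate() once or a dozen times leaves the account (and the rest of the world) in the same state.

class Account {
    private boolean active = true;

    // Idempotent: the world changes only on the first call; repeating it has no further effect.
    void deactivate() {
        if (!active) {
            return;
        }
        active = false;
        // e.g. send exactly one notification here, not one per call
    }

    boolean isActive() {
        return active;
    }
}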

The third main takeaway was her call for Lazy Evaluation. Doing that forces you to think in objects or first-class functions, and prevents procedural style – and that should always be a good thing.

What appears to be OO

I first stumbled when Jessica introduced a service object to limit a function's access to the database. Isn't that an application of the interface segregation principle (ISP) and the Single Responsibility Principle (SRP)? 
Smalltalk people always told me that even in Java, the client should define the exposed interface, and this is just that: If I need a service that just inserts, the method should have access to only that.

Next, she speaks about specific typing - again, isn't that just OO? If something represents a name, I should call it "Name", not "String". The DDD people have said it since 2004 and hardly anyone listened, so Jeff Bay had to point it out again in his paper on Object Calisthenics some seven years ago, calling on us to "wrap all primitives and Strings" (and numeric objects, of course).
Applying this principle in classes and productive code, I can well say that it is a game changer – your code becomes much clearer and more expressive this way, even more so when you apply his next rule – "use first-class collections" – as well.
Back to Jessica: good on her for picking up on that, I fully agree. It surprised me, though, that she watered down the principle by pointing us to Options and Tuples later in her talk. Those two are never domain-specific, so effectively she just put two new primitive types on our plate.
More recent languages have more elegant ways of expressing these concepts, and they might feel more at home there.
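A minimal, illustrative example of such a wrapped primitive: a Name is a Name, not a String, and the compiler now keeps callers honest.

final class Name {
    private final String value;

    Name(String value) {
        // Validation lives with the type, not scattered across every caller.
        if (value == null || value.isEmpty()) {
            throw new IllegalArgumentException("a name must not be empty");
        }
        this.value = value;
    }

    @Override
    public String toString() {
        return value;
    }
}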
I was surprised once more when she presented the lack of concern about when or how my code is executed as a principle of functional programming. With the OO concept of encapsulation comes the idea of Single Abstract Method objects, which are handed around as executable blocks. She even pointed us to the GoF patterns Command and Strategy, so clearly, this has been around for a while and was considered OO back then. 
Did the GoF silently sneak in functional ideas? How dare they!

What I didn't quite get

Apart from these three good points, there was one I didn't get: Structural Sharing? What was it about, memory efficiency? Didn't she just tell us that we should rely on our compilers instead of doing their work?


So, all in all, it doesn't matter. Whether it's OO or functional, this is a good talk to watch. 
If you want to brush up on intermediate concepts of coding or see some examples of how things might get easier if you break your common patterns: Go watch!

When Otto, in the person of Sabrina Hauptman, asked me whether I wanted to take part in the InnoDays 2016, I was unsure how much I would really get out of it. After all, I had already attended similar events: hackathons, FedEx Days, and whatever they are all called. And I was never really convinced.

At first it all looks quite simple and plausible. The company wants to become more innovative (who doesn’t?), and many developers have a strong desire to finally try out all the new technologies that are currently hot “out there”.

So you let the developers’ creativity run free for a day or two and see what comes out. The events are invariably cool. You can feel the energy, and the developers have a lot of fun. For many companies that alone is already a major effort, and I very much respect the attempt.

Problems with many hackathons

Usually, though, it is not much more than a first attempt:

  • There is no defined process for what happens with the results. There is only the diffuse hope that someone “up there” will recognize the brilliance of the results and then put the projects on the roadmap. Which doesn’t happen.
  • The results are usually extremely technical (who hasn’t always wanted to set up a Hadoop cluster or run Docker in Docker?) and the value for the company remains unclear. That is another reason it doesn’t really surprise me that nobody turns the results into real projects.
  • Radical first ideas are almost always rubbish. They only become good or even great ideas by passing through different heads over a longer period of time and being combined with other ideas (see [Sawyer2008]). The hackathons I have seen hardly accomplish this, if at all.

So many hackathons are not much more than a flash in the pan: they flare up briefly but ultimately have no effect. They can even lead to frustration when the participants recognize this ineffectiveness.

On this topic, see also my blog post on the concept of slack time, in whose comments Markus Andrezak called the concept “opium for the masses”. That also applies to quite a few hackathons: do they do more than pacify the developers?

Otto InnoDays

It is certainly already a remarkable achievement when a corporation like Otto runs a hackathon as described above and thereby shows that such things are not reserved for the new, hip Internet companies.

Still, the question remained what I could learn from it.

The first Otto InnoDays took place in 2015 – back then only with the internal staff of the development organization. In 2016 the InnoDays were opened up to external staff and to the business departments. That already sounds quite good, because it offered the chance to break out of the purely technical focus. The Otto InnoDays 2016 were also set up on a large scale, with a process that ultimately ran for two weeks. And the organization wasn’t done by just anyone “on the side”: Otto provided significant capacity for preparation and for accompanying the event. They really seemed to mean it.

That did not dispel all my doubts. But curiosity began to outweigh them, and so I accepted.

Goals and course of the InnoDays 2016

Details… (to follow)

Otto InnoDays 2016: the whole story

This blog post is one article in a blog post series.

Overview of the whole blog post series…


  • [Sawyer2008] Keith Sawyer: “Group Genius: The Creative Power of Collaboration”

I took part in the Otto InnoDays 2016. On Twitter, the event could be followed under the hashtag #innodays2016.

This blog post is the start of a small blog post series on the topic. I have agreed with Otto that I won’t do any flattering here and may describe everything openly. In particular, I may also name what, in my eyes, could be done better (only personal insults may not be published here – which I don’t want to do anyway :-). And I noticed quite a few things that could be done better. That shouldn’t cloud the view of what has already been achieved, though. Compared with similar events at other companies, the Otto InnoDays 2016 were definitely among the front runners. And when I consider that Otto is a corporation with more than 4,000 employees and not an e-business company with only 200 employees, I take my hat off. Many supposedly more agile companies could learn a great deal from the Otto InnoDays 2016.

In the coming weeks, I plan to cover the following topics in this blog post series:

Part 1: Why I was skeptical at first and then took part anyway.

When Otto, in the person of Sabrina Hauptman, asked me whether I wanted to take part, I was unsure how much I would really get out of it. After all, I had already attended similar events: hackathons, FedEx Days, and whatever they are all called. What should be so particularly interesting here – apart from the fact that the format is now practiced by corporations too?


Part 2: The goals and the process

The InnoDays were meant to encourage thinking outside the box and also offer room for disruptive ideas. In total, the InnoDays stretched over two weeks. They began with a warm-up phase, followed by idea generation. The ideas were then filtered (voting with your feet), and the selected ideas were implemented as prototypes. The results were presented at the end and judged by a jury. In addition, there is a defined process for how the “winners” are fed into the roadmap planning process.


Part 3: The culture change

The Otto group wants to modernize itself and become fit for the digital age. To that end, considerable successes have been achieved in the eCommerce area in recent years. Scaled Scrum with a system architecture based on verticals provides the technical foundation on which business agility (to use a buzzword for once) becomes possible.

The InnoDays are a further step toward a corporate culture based on agile values. At the InnoDays I saw where these steps are already being taken courageously, but also where old structures keep breaking through.


Part 4: The projects

A total of 18 projects were carried out. I looked at their content and classified them according to various criteria. How many truly disruptive ideas were actually among the projects? How many projects were based on far-fetched ideas that would turn out not to be feasible in the end? Which projects address new target groups, and which “only” optimize the existing service for existing customers? And why did it turn out the way it did?


Part 5: The big filters

At the InnoDays, ideas and projects were filtered several times. First, the ideas were filtered by voting with your feet. At the final presentation, the winners were filtered out. These go into a roadmap planning process in which they are filtered once more.

This approach corresponds to the state of the art for such events. From my point of view, however, it is not yet optimal. Radical ideas in their first version are almost always rubbish. They have to wander through different heads, be changed there, and be combined with other ideas. Only then can truly great things emerge. The big-filter approach prevents this process and therefore tends to turn mediocre ideas into reality. That is not too bad, because at least the bad ideas are weeded out. But it can be done much better: diverge & merge is more powerful than diverge & filter.


Part 6: A few theses

Otto and other companies can learn a lot from the InnoDays – also about their internal processes and structures. I have written down a few theses on this. Perhaps they will help make future InnoDays even better.


Part 7: Summary

Think, Create, Learn: that was the motto of the Otto InnoDays 2016. Looking back, how does the whole event measure up against this motto?




In this post, I will explore what makes a good Development Team in Scrum from an organizational perspective.

Great teams

When it comes to structuring the Development Team, the Scrum Guide tells us that the team is “cross-functional, with all of the skills as a team necessary to create a product increment” and that the team should be structured to “organize and manage their own work”. Finally, we learn that the team should have between 3 and 9 developers, to be able to contain all the skills required while not increasing management complexity beyond the level self-organization can casually deal with.

Over the past few years, I have helped some clients to improve their teams’ setup and structure, and found that there are more criteria that the Scrum Guide only hints at but never mentions explicitly.

Have few dependencies

While Scrum expects the team to be able to provide the product increment on their own, corporate reality often sees products with a scope much larger than what a single team can produce in reasonable time.
You may have read about scaling approaches like LeSS, and their call for feature teams. Rare is the product, though, where splitting into features is possible the moment you start with Scrum.

To alleviate the situation, I look for a setup where inter-team dependencies are as few as possible. In an ideal case, this means that the team has one team to receive input from and one more team to deliver output to.
For work to flow, the team has to be independent from other teams. Define your teams so that their scope covers a large, uninterrupted piece of the value chain, and integrate technical concerns as much as possible.
That way, you will reduce the number of handovers both in planning and in the actual work done.

Stay together

In a corporate environment, projects start and stop all the time. Project managers vie for resources, eyes on the deadline, not on the social effects. For team members, this means that today’s colleague may be gone tomorrow – so each of them looks out for their own piece of work first and foremost.
When introducing Scrum in such an environment, I emphasize the twin values of stability and reliability.
It takes a stable environment for employees to benefit from investing in the team’s success rather than their own achievements. Team members learn to trust each other over time and start to co-operate. In a stable team, members can learn each other’s abilities and preferences – and that’s what it takes to get better.
Reliability, meanwhile, means that team members are on the team, period. Splitting people between projects eats up time beyond the numbers in your spreadsheet, as it introduces interruptions in the form of urgent requests from outside projects.
Now, any corporation worth the name will have a complex resource distribution in place already, and you will be hard pressed to change it all at once. What to do?
Allot people to the product as much as possible, and encourage them to block fixed slots of time for working on the product. If necessary, suggest they turn off phones, chat and mail clients; empower them to postpone dealing with incoming requests till team time is over.
That way, their team can learn to rely on them, and teamwork improves.

Learn from each other

Remember what the Scrum Guide said? “Teams organize and manage their own work.”
It takes a certain level of experience to do that, and I have frequently heard managers or Scrum Masters complain that their team is not yet ready for this level of freedom – and then coordinate the work themselves, trampling the sprouts of self-organization.
So, it takes experience for the team to self-organize, and it takes laissez-faire.
The best teams I worked with did not consist of veterans only, though, since less experienced team members bring two crucial factors to the team: By embracing their curiosity, they challenge long-standing practices; by working with several of their more senior colleagues, they foster communication.
So, when building a team, I look for a good mix of old hands and new to make learning happen and self-organization possible, while still having all the skills to actually build the product.

Make work flow

This concludes my thoughts about criteria to watch out for when building great teams:
reliability, stability, experience, cross-functionality and independence.

So, when you are next thinking about how to improve your team, just think “Teams R SEXI”, and you are well on your way:

Reliable,
Stable,
Experienced adequately,
X-functional and
Independent.

Mind, though, that these criteria are strongly skewed towards the organizational perspective and pay no heed to the more mushy, social criteria that are well worth considering as well.

What criteria do you apply to find your teams? Did you go at it from an entirely different angle? 
I am looking forward to your comments.
In this post, I explore the Scrum value of focus, the notion of focus time and its implications for your Scrum team’s setup.

Focus as a Scrum value

Focus. It is one of the core values of Scrum, reminding us to be precise about our development goals and to stay on target at all times.
Focus tells Product Owners to build one product for a well defined audience, tells Scrum Masters to help their teams improve in one area at a time, and tells the Development Team to go for the Sprint goal and only the Sprint goal.
It is them - the team - I want to focus on in this post, although the thoughts apply to the other roles as well.

Focus time

Recently, a client introduced me to the notion of focus time. By his definition, focus time is the part of a Sprint that the team actually spends working on product backlog items directly contributing to the Sprint goal.
Thus, focus time is comprised of all the work required to get one specific product backlog item from “ready” to “done”, including design, implementation, testing, documentation and posing the odd question to the Product Owner or relevant stakeholders.
It is not: Helping to prepare the next Sprint, demonstrating the product increment in the review, improving the process in the retrospective or organizing the team in the Daily Scrum.1 While all of these activities are important, they are by definition off-focus, as they do not directly improve the product2.

Perfect world

In an ideal setup, the team spends most of their time focused. The Scrum Guide allows for up to 10% of the development team’s time to be taken for product backlog refinement3, while planning, review and retrospective take up 10% of the team’s time yet again.
Another 5% are eaten up by Daily Scrums and general communication with people in- and outside of our product: Even the best of bosses and stakeholders need some attention from time to time.

This leaves our ideal development team with
100% Sprint time
- 10% refinement
- 10% planning, review, retrospective
- 5% Daily Scrum and communication
=  75% focus time

That’s 7½ workdays out of your two week sprint! Contrast that with the common complaint that Scrum is all talking and keeps people from getting things done.

Half measures

Rare, however, is the team where all team members contribute all day, every day. The larger an organization, the larger its tendency to split people’s time and attention over several projects or products.
25 years ago, in 'Quality Software Management: Systems Thinking', Gerald Weinberg introduced us to the hidden cost of context switching, claiming that an even split across two projects left a knowledge worker with only 40% of his or her capacity to apply to each of them. (While Weinberg's original text is not available for free, his idea has spread beyond the cover of his book.)

So, let’s split some people and look at their focus time:
50% allotted time
- 10% context switching
- 5% refinement
- 10% planning, review and retrospective
-  5% Daily Scrum and communication
= 20% focus time

In case you’re wondering, I left the numbers for the four main Scrum rituals at their original value, since all teams I have worked with so far insisted that their split members take part in all of them – this is where the team makes all major decisions.
So, out of a promised 50% of somebody’s time and attention, only 20% contribute directly to the goals we set for the product – that’s hardly more than half of the 37.5% we might expect when looking at the original calculation.
Now, suddenly, Scrum is all talk and no action.
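Both calculations boil down to simple subtraction; here is a tiny sketch of the same arithmetic (the percentages mirror the estimates above and are assumptions, not measurements):

class FocusTimeCalculator {

    static double focusTime(double allotted, double contextSwitching,
                            double refinement, double ceremonies, double communication) {
        return allotted - contextSwitching - refinement - ceremonies - communication;
    }

    public static void main(String[] args) {
        // Full-time team member: 100 - 0 - 10 - 10 - 5 = 75% focus time
        System.out.println(focusTime(100, 0, 10, 10, 5));
        // Split 50/50 across two products: 50 - 10 - 5 - 10 - 5 = 20% focus time
        System.out.println(focusTime(50, 10, 5, 10, 5));
    }
}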

Said and done

So, the idea of focus – concentrating on what we ought to do – led us to the notion of focus time, the time spent directly contributing to the Sprint goal. We have seen that ¾ of a team’s time could be spent focused, and that this number shrinks drastically when we split people’s time across products.

Now, I’d like to invite you to examine the time your team spends focused.
Do you get close to ideal numbers? Are you in the 20% range, even though everyone is on the project 100% of their time? Where does the time people don’t spend focused go and which of these activities are really necessary?
All in all: What could you do to improve the numbers?

Besides looking at things that distract the team, you might want to look at the product backlog items your team and Product Owner agree upon for the sprint. Is there a common theme, so that most or all of them contribute to a single goal?
If they have little in common, one could argue, there is not really one Sprint goal, and the team could never reach the lofty numbers I laid out above. However, there could be leverage in the way the Product Owner prioritizes the product backlog or the way you plan your sprints.

Let’s talk about your numbers and your findings in the comments!


Writing this post brought to my mind a number of questions, among them:
“What keeps teams from focussing and how to get back there quickly?”, “How to show the value of Scrum’s rituals to the ‘rather do than talk’ faction?”, and “Is focus time a good measure for the quality of a Scrum process? What other metrics are there?”

Comment below if you have thoughts on any of these questions or are particularly interested in one of them.

1 Neither is it taking part in communities of practice, performance reviews, answering mails, general discussion or undirected learning, along with most other things team members like to do.

2 Actually, this is the main reason that some developers tend to resent them. Hackers want to do, not talk. As a Scrum Master, you could do worse than to think about this before your colleagues bring it up.

3 Note that this is not the established practice of a “refinement meeting”, but rather includes all activities dealing with the process of refining the product backlog.
