Wednesday, December 13, 2017

Cambridge Lean Coffee


This month's Lean Coffee was hosted by us at Linguamatics. Here are some brief, aggregated comments and questions on the topics covered by the group I was in.

Performance testing

  • We have stress tests that take ages to run because they are testing a long time-out
  • ... but we could test that functionality with a debug-only parameter (see the sketch after this list).
  • Should we do it that way, or only with production, user-visible functionality?
  • It depends on the intent of the test, the risk you want to take, the value you want to extract.
  • Do both? Maybe do the long-running one less often?
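
By way of illustration, here's a minimal sketch of the debug-only parameter idea in Python. Everything in it is hypothetical; the point is just that the time-out becomes an argument, defaulting to the production value, which a test can shrink to something that runs in milliseconds:

    import time

    class Session:
        def __init__(self, timeout_seconds=3600):
            # timeout_seconds defaults to the production value; a debug
            # or test build can override it.
            self.timeout_seconds = timeout_seconds
            self.started = time.monotonic()

        def expired(self):
            return time.monotonic() - self.started > self.timeout_seconds

    def test_session_expiry():
        session = Session(timeout_seconds=0.1)   # debug-only override
        assert not session.expired()
        time.sleep(0.2)
        assert session.expired()                 # no hour-long wait

Note the trade-off from the discussion: this exercises the expiry logic quickly, but it no longer tests the production value of the time-out itself.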

Driving change in a new company

  • When you join a new company and see things you'd like to change, how do you do it without treading on anyone's toes?
  • How about when some of the changes you want to make are in teams you have no access to, on other sites?
  • Should I just get my head down and wait for a couple of years until I understand more?
  • Try to develop face-to-face relationships.
  • Find the key players.
  • Build a consensus over time; exercise patience.
  • Make changes incrementally so you don't alienate anyone.
  • If you wait you'll waste good ideas.
  • Don't be shy!
  • There's no monopoly on good ideas.
  • Can you do a proof-of-concept?
  • Can you just squeeze a change in?
  • You don't want to be a distraction, so take care.
  • Organise a show-and-tell to put your ideas out there.
  • Give feedback to other teams.
  • Attend other teams' meetings to see what's bothering them.
  • Get some allies. Find people who agree with you.
  • Find someone to give you historical context
  • ... some of your ideas may have been had, or even tried and failed.
  • As a manager, I want you to make some kind of business case to me
  • ... what problem do you see; who does it affect; how; what solution do you propose; what pros and cons does it have; what's the cost/benefit?
  • Smaller changes will likely be approved more easily.
  • Find small wins to build credibility.

When did theory win over practice?

  • I've been reading a performance testing book which has given me ideas I can take into work on Monday and implement.
  • I've been reading TDD with Python and it's changed how I write code
  • ... and reinvigorated my interest in the testing pyramid.
  • Rapid Software Testing provided me with structure around exploratory testing.
  • ... I now spend my time in the software, not in planning.
  • Sometimes theory hinders practice; I found that some tools recommended by RST just got in my way.
  • I heard about mindmaps for planning testing at a conference.
  • I've been influenced by Jerry Weinberg. His rule of three and definition of a problem help me step back and consider other angles
  • ... the theory directly influences the practice.

How many testers is the right number?

  • That's a loaded question.
  • It depends!
  • The quality of the code matters; better code will need less testing
  • ... but could the development team do more testing of their own?
  • How do you know what the quality of the code is, in order to put the right number of testers on it?
  • Previous experience; how many bugs were found in the past.
  • But the number of bugs found is a function of how hard you look.
  • Or how easy they are to find.
  • Or what kinds of bugs you care to raise.
  • You need enough testers to get the right quality out at the end (whenever that is).
  • Our customers are our testers.
  • Our internal customers are our testers.
  • We have no testers
  • ... we have very high expectations of our unit tests
  • ... and our internal customers are very good at giving feedback
  • ... in fact, our product provides a reporting interface for them.
  • Microservices don't need so many testers, but perhaps the developers would benefit from a test coach.
  • If the customers are happy, do you need to do much testing?
  • Customers will work around issues without telling you about them.
  • It's helpful to have a culture of reporting issues inside the company.
  • I see a lot of holes in process as well as software.
  • You don't need any testers if everyone is a tester.

Sunday, December 3, 2017

Compare Testing


If you believe that testing is inherently about information then you might enjoy Edward Tufte's take on that term:
Information consists of differences that make a difference.
We identify differences by comparison, something that, as a working tester, you'll be familiar with. I bet you ask a classic testing question of someone, including yourself, on a regular basis:
  • Our competitor's software is fast. Fast ... compared to what?
  • We must export to a good range of image formats. Good ... compared to what?
  • The layout must be clean. Clean ... compared to what?
But while comparison is important as a tool for getting clarification through conversation, for me testing feels more fundamentally about comparisons than that.

James Bach has said "all tests must include an oracle of some kind or else you would call it just a tour rather than a test." An oracle is a tool that can help to determine whether something is a problem. And how is the value extracted from an oracle? By comparison with observation!

But we've learned to be wary of treating an oracle as an all-knowing arbiter of rightness. Having something to compare with should not lure you into this appealing trap:
I see X, the oracle says Y. Ha ha! Expect a bug report, developer!
Comparison is a two-way street and driving in the other direction can take you to interesting places:
I see X, the oracle says Y. Ho hum. I wonder whether this is a reasonable oracle for this situation?
Cem Kaner has written sceptically about the idea that the engine of testing is comparison to an oracle:
As far as I know, there is no empirical research to support the claim that testers in fact always rely on comparisons to expectations ... That assertion does not match my subjective impression of what happens in my head when I test. It seems to me that misbehaviors often strike me as obvious without any reference to an alternative expectation. One could counter this by saying that the comparison is implicit (unconscious) and maybe it is. But there is no empirical evidence of this, and until there is, I get to group the assertion with Santa Claus and the Tooth Fairy. Interesting, useful, but not necessarily true.
While I don't have any research to point to either, and Kaner's position is a reasonable one, my intuition here doesn't match his. (Though I do enjoy how Kaner tests the claim that testing is about comparisons by comparing it to his own experience.) Where we're perhaps closer is in the perspective that not all comparisons in testing are between the system under test and an oracle with a view to determine whether the system behaviour is acceptable.

Comparing oracles to each other might be one example. And why might we do that? As Elaine Weyuker suggests in On Testing Non-testable Programs, partial oracles (oracles that are known to be incomplete or unreliable in some way) are common. To compare oracles we might gather data from each of them; inspect it; look for ways in which each has utility (such as which has more predictive power in scenarios of interest).
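
Purely as illustration, here's how that gather-and-score loop might look in Python. The oracles, scenarios, and observations are all invented; the point is that each oracle's prediction is compared with what was actually observed, and then the resulting scores are compared with each other:

    def score(oracle, observations):
        # Fraction of observed outcomes the oracle predicted correctly.
        hits = sum(1 for scenario, outcome in observations
                   if oracle(scenario) == outcome)
        return hits / len(observations)

    # Two hypothetical partial oracles for classifying response times.
    threshold_oracle = lambda size: "ok" if size < 1000 else "slow"
    optimistic_oracle = lambda size: "ok"

    # (scenario, observed outcome) pairs gathered from the system under test.
    observations = [(10, "ok"), (500, "ok"), (5000, "slow"), (20000, "slow")]

    for name, oracle in (("threshold", threshold_oracle),
                         ("optimistic", optimistic_oracle)):
        print(name, score(oracle, observations))
    # threshold scores 1.0, optimistic 0.5: prefer the threshold oracle here.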

And there we are again! The "more" in "which has more predictive power" is relative: it's telling us that we are comparing and, in fact, here we're using comparisons to make a decision about which comparisons might be useful in our testing. I find that testing is frequently non-linear like that.

Another way in which comparison is at the very heart of testing is during exploration. Making changes (e.g. to product, data, environment, ...) and seeing what happens as a result is a comparison task. Comparing two states separated by a (believed) known set of actions, irrespective of whether you have an idea about what to expect, is one way of building up knowledge and intuition about the system under test, and of helping to decide what to try next, what to save for later, and what looks uninteresting (for now).
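
To make that concrete, here's a minimal sketch of the before-and-after comparison in Python. The probes are hypothetical; a real system needs its own means of access to state:

    def snapshot(system):
        # Which aspects of state to capture is itself a judgement call
        # (see the meta tasks below); these three probes are invented.
        return {"users": system.user_count(),
                "queue": system.queue_depth(),
                "config": system.config_hash()}

    def changed(before, after):
        # Differences between the snapshots are candidates for
        # investigation, whether or not we had expectations about them.
        return {key: (before[key], after[key])
                for key in before if before[key] != after[key]}

    # before = snapshot(system)
    # perform_actions(system)        # the (believed) known set of actions
    # print(changed(before, snapshot(system)))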

Again this throws up meta tasks: How to know which aspects of a system's state to compare? How to know which variables it is even possible to compare? How to access their state at the right frequency and granularity to make them usable? And again there's a potential cycle: gather data on what it might be possible to compare; inspect those possibilities; find ways in which they might have utility.

I started here with a Tufte quote about information being differences that make a difference, and said that identifying the differences is an act of comparison. I didn't say so at that point, but identifying the ones that make a difference is also a comparison task. And the same skills and tools can be used for both: testing skills and tools.
Image: https://flic.kr/p/q8zmqn

Thursday, November 23, 2017

Six & Bugs & Joke & Droll


Hiccupps just turned six years old. Happy birthday to us. And thank you for reading; I hope you're getting something out of it still.

Unwittingly I've stumbled into a tradition of reflecting on the previous 12 months and picking out a few posts that I liked above the others for some reason. Here's this year's selection:

  • What We Found Not Looking for Bugs: a headrush conversation with Anders Dinsen on the nature and timing of testing 
  • The Dots: a headrush conversation with myself on the connections between the connected things 
  • Fix Up, Look Sharp: a headrush reading experience from Ron Jeffries' Extreme Programming Adventures in C# 
  • Quality != Quality: a headrush of being picked up by Hacker News, my page views going nuts, and developers debating quality 
  • A (Definition of) Testing Story: a headrush last-minute conference proposal accepted at UKSTAR 2018 

And in the meantime my mission to keep my testing mind limber with rule-of-three punning continues too. Check 'em out on Twitter. Join in!

(And apologies to Ian Dury.)

Saturday, November 18, 2017

Don't Knock It


They were chuckling at me when I came back from the kitchen next to the meeting room. They were grinning and smirking at each other because they'd heard me laugh out loud and knew that I was the only person in there.

So I felt compelled to explain that I was laughing because I value highly in testers the ability to find more than one way to look at any given situation. Stated drily like that, it doesn't sound worthy of a solo guffaw, does it? But what I actually said went a bit like this ...

You know that scene in The Lord Of The Rings where they're trying to get into a mine? There's a clue phrase in Elvish above the door that Gandalf translates as "Speak, friend, and enter" but then he can't remember what the password is. Eventually he sees an alternative interpretation, "Say friend and enter", and they get in.

Well, I was in the kitchen looking at the door to the car park and there's a sticker on it which I'm sure I must've read before ...



... but this time I thought: is that door calling me a knob?

Thursday, November 16, 2017

Respond to the Context


Sometimes a phrase just lights up the room when it's spoken.

I encountered one today. One of my team was debriefing us, giving her analysis of our answers to her survey on our experiences of the team pairing experiment she ran.

I say it lit up the room, but really for me it was writ large in fireworks, sounding a fanfare, and flying loop-the-loops. Here it is:
Respond to the context.
I'll just leave it there for you. And also this.
Image: https://flic.kr/p/UWL4d5

Wednesday, November 8, 2017

NoSQL for Us


Unfortunately, last night's Cambridge Tester Meetup talk about database unit testing was cancelled due to speaker illness. No problem! We had Lean Coffee instead. Here are a few aggregated comments and questions from the group discussion.

How do you deal with internal conflicts?

  • Give overt, verbal appreciation to the other person and their perspective.
  • Be humble.
  • Leave your ego behind.
  • Conflict is healthier than the alternative. 
  • Conflict betrays a lack of common understanding.
  • I seek conflict.
  • Conflict of personality or of ideas?
  • I want to squeeze out ambiguity and lack of clarity.
  • A stand-up row can be acceptable to achieve that. (Even if it isn't the first thing I'll try.)
  • Some people avoid conflict because they feel they won't win the argument.
  • What is the source of the conflict? That makes a difference.
  • Try to keep discussion to facts; objective not subjective; technical not personal.
  • Try to get to know each other as people.
  • Try to build team spirit.
  • Change your language for different people.
  • Make yourself likeable.
  • Be assertive. That is, be calm, direct and equal.


What does Agile mean to you?

  • The Agile Manifesto is about software engineering and not about other processes.
  • Agile is a good term for marketing to upper management.
  • Extreme Programming is not a good term for marketing to upper management.
  • Agile is for projects where we don't know what we want.
  • It's for when we want to do the right thing but don't know how.
  • It's about early feedback.
  • It's about collaboration.
  • It's about being responsive.
  • Anything-by-the-book is never good.
  • "Painting by numbers doesn't teach you how to paint".
  • Most teams have 30% of their members who don't know what they're doing.
  • I'm a fan of Agile but not a fan of Scrum.
  • Teams at my work mostly use Kanban.
  • It's about knowing things will change and not going overboard on planning.

TDD Difficulty

  • So many people talk about TDD but why is it so hard to get it into use?
  • I like it and my boss likes it, but in five years we've never moved to it.
  • Why?
  • Perhaps it's too big a change for our team.
  • Perhaps no-one wants to make the effort to change.
  • BDD is a better approach.
  • Is TDD better as personal preference than mandated practice?
  • It only matters that there are tests at the end.
  • Has anyone tried to measure the pros/cons of doing it?
  • Some people think TDD is an overhead; work without benefit.
  • TDD is about design rather than tests (see the sketch after this list).
  • Is TDD really about capturing intent?
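
For anyone who hasn't seen it, here's a minimal, hypothetical sketch of one TDD cycle in Python. It also illustrates the design claim: the test is written first and the implementation is shaped to satisfy it:

    # Red: write a failing test before the code it exercises exists.
    def test_slug():
        assert slug("Lean Coffee!") == "lean-coffee"

    # Green: write just enough implementation to make the test pass.
    def slug(text):
        words = "".join(c if c.isalnum() else " " for c in text.lower()).split()
        return "-".join(words)

    # Refactor: tidy up with the test as a safety net, then repeat.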

How are you using Docker in Testing?

  • To avoid having to deal with dependencies.
  • For installation testing; it's easy to get a known, repeatable environment (see the sketch after this list).
  • Interested in trying to containerise test cases so that we can give developers something to run to reproduce an issue.
  • Virtual machines are an unnecessary overhead much of the time.
  • Docker makes it easier to exploit all of the CPU on a host.
  • Docker is no help for kernel development and testing (if you need to use variant kernels).
  • My team haven't found a use for it.
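
As a sketch of the known, repeatable environment point, here's a hypothetical installation test driven from Python through the Docker CLI. The image, package, and product names are all invented:

    import os
    import subprocess

    def run_install_test(image="ubuntu:16.04", installer="our-product.deb"):
        host_path = os.path.abspath(installer)
        # --rm discards the container afterwards, so every run starts
        # from the same clean image.
        return subprocess.run(
            ["docker", "run", "--rm",
             "-v", f"{host_path}:/tmp/{installer}",
             image,
             "sh", "-c", f"dpkg -i /tmp/{installer} && our-product --version"],
            capture_output=True, text=True)

    # result = run_install_test()
    # print(result.returncode, result.stdout)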

Wednesday, November 1, 2017

A (Definition of) Testing Story

I'm speaking on the Storytelling track at UKSTAR 2018. In eight minutes I'm going to explain how I arrived at my own definition of testing, why I bothered, and what I do with it now I've got it. 

You can find some of the background in these posts:
and I made a kind of sketchnote video thing to promote it too:


If you still want to come after all that, get 10% off, on me:

See you there, I hope.