
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Because Reading is Fundamental

Coding Horror - Jeff Atwood - Wed, 11/26/2014 - 02:21

Most discussions show a bit of information next to each user:

What message does this send?

  • The only number you can control printed next to your name is post count.
  • Everyone who reads this will see your current post count.
  • The more you post, the bigger that number next to your name gets.

If I have learned anything from the Internet, it is this: be very, very careful when you put a number next to someone's name. Because people will do whatever it takes to make that number go up.

If you don't think deeply about exactly what you're encouraging, why you're encouraging it, and all the things that may happen as a result, you may end up with … something darker. A lot darker.

Printing a post count number next to every user's name implies that the more you post, the better things are. The more you talk, the better the conversations are. Is this the right message to send to everyone in a discussion? More fundamentally, is this even true?

I find that the value of conversations has little to do with how much people are talking. I find that too much talking has a negative effect on conversations. Nobody has time to listen to the resulting massive stream of conversation, they end up just waiting for their turn to pile on and talk, too. The best conversations are with people who spend most of their time listening. The number of times you've posted in a given topic is not a leaderboard; it's a record of failing to communicate.

Consider the difference between a chat room and a discussion. One is a never-ending flow of disconnected, stream of consciousness sentences that you can occasionally dip your toes into to get the temperature of the water, and that's about it. The other is a process of back and forth paragraphs that ideally result in an evolution of positions as your mutual understanding becomes more nuanced.

The Ars Banana Experiment

Ars Technica ran a little experiment in 2011. When they posted "Guns at home more likely to be used stupidly than in self defense", embedded in the last sentence of the seventh paragraph of the article was this text:

If you have read this far, please mention Bananas in your comment below. We're pretty sure 90% of the respondants to this story won't even read it first.

The first person to do this is on page 3 of the resulting discussion, comment number 93. Or as helpfully visualized by Brandon Gorrell:

Plenty of talking, but how many people actually read up to paragraph 7 (of 11) of the source article before they rushed to comment on it?

The Slate Experiment

In You Won't Finish This Article, Farhad Manjoo dares us to read to the end.

Only a small number of you are reading all the way through articles on the Web. I’ve long suspected this, because so many smart-alecks jump in to the comments to make points that get mentioned later in the piece.

But most of us won't.

He collected a bunch of analytics data based on real usage to prove his point.

These experiments demonstrate that we don't need to incentivize talking. There's far too much talking already. We badly need to incentivize listening.

And online, listening = reading. That old school program from my childhood was right, so deeply fundamentally right. Reading. Reading Is Fundamental.

Let's say you're interested in World War II. Who would you rather have a discussion with about that? The guy who just skimmed the Wikipedia article, or the gal who read the entirety of The Rise and Fall of the Third Reich?

This emphasis on talking and post count also unnecessarily penalizes lurkers. If you've posted five times in the last 10 years, but you've read every single thing your community has ever written, I can guarantee that you, Mr. or Mrs. Lurker, are a far more important part of that community's culture and social norms than someone who posted 100 times in the last two weeks. Value to a community should be measured every bit as much by how much you've read as by how much you've talked.

So how do we encourage reading, exactly?

You could do crazy stuff like require commenters to enter some fact from the article, or pass a basic quiz about what the article contained, before allowing them to comment on that article. On some sites, I think this would result in a huge improvement in the quality of the comments. It'd add friction to talking, which isn't necessarily a bad thing, but it's a negative, indirect way of forcing reading by denying talking. Not ideal.

I have some better ideas.

  1. Remove interruptions to reading, primarily pagination.

    Here's a radical idea: when you get to the bottom of the page, load the next damn page automatically. Isn't that the most natural thing to want when you reach the end of the page, read the next one? Is there any time that you've ever been on the Internet reading an article, reached the bottom of page 1, and didn't want to continue reading? Pagination is nothing more than an arbitrary barrier to reading, and it needs to die a horrible death.

    There are sites that go even further here, such as The Daily Beast, which actually loads the next article when you reach the end of the one you are currently reading. Try it out and see what you think. I don't know that I'd go that far (I like to pick the next thing I read, thanks very much), but it's interesting.

  2. Measure read times and display them.

    What I do not measure, I cannot display as a number next to someone's name, and cannot properly encourage. In Discourse we measure how long each post has been visible in the browser for every (registered) user who encounters that post. Read time is a key metric we use to determine who we trust. If you aren't willing to visit a number of topics and spend time actually listening to us, why should we talk to you – or trust you.

    Forget clicks, forget page loads, measure read time! We've been measuring read times extensively since launch in 2013 and it turns out we're in good company: Medium and Upworthy both recently acknowledged the intrinsic power of this metric.

  3. Give rewards for reading.

    I know, that old saw, gamification, but if you're going to reward someone, do it for the right things and the right reasons. For example, we created a badge for reading to the end of a long 100+ post topic. And our trust levels are based heavily on how often people are returning and how much they are reading, and virtually not at all on how much they post.

    To feel live reading rewards in action, try this classic New York Times Article. There's even a badge for reading half the article!

  4. Update in real time.

    Online we tend to read these conversations as they're being written, as people are engaging in live conversations. So if new content arrives, figure out a way to dynamically rez it in without interrupting people's read position. Preserve the back and forth, real time dynamic of an actual conversation. Show votes and kudos and likes as they arrive. If someone edits their post, bring that in too. All of this goes a long way toward making a stuffy old debate feel like a living, evolving thing versus a long distance email correspondence.

These are strategies I pursued with Discourse, because I believe Reading Is Fundamental. Not just in grade school, but in your life, in my life, in every aspect of online community. To the extent that Discourse can help people learn to be better listeners and better readers – not just more talkative – we are succeeding.

If you want to become a true radical, if you want to have deeper insights and better conversations, spend less time talking and more time reading.

Categories: Programming

R: dplyr – Select ‘random’ rows from a data frame

Mark Needham - Wed, 11/26/2014 - 01:01

Frequently I find myself wanting to take a sample of the rows in a data frame where just taking the head isn’t enough.

Let’s say we start with the following data frame:

data = data.frame(
    letter = sample(LETTERS, 50000, replace = TRUE),
    number = sample(1:10, 50000, replace = TRUE)
)

And we'd like to sample 10 rows to see what it contains. We'll start by generating 10 random numbers to represent row numbers using the sample function:

> randomRows = sample(1:length(data[,1]), 10, replace=T)
> randomRows
 [1]  8723 18772  4964 36134 27467 31890 16313 12841 49214 15621

We can then pass that list of row numbers into dplyr’s slice function like so:

> data %>% slice(randomRows)
   letter number
1       Z      4
2       F      1
3       Y      6
4       R      6
5       Y      4
6       V     10
7       R      6
8       D      6
9       J      7
10      E      2

If we're using that throughout our codebase then we might want to pull out a function like so:

pickRandomRows = function(df, numberOfRows = 10) {
  # sample whole row numbers, as above; runif returns fractional values (some below 1),
  # so slice may not return exactly numberOfRows rows
  df %>% slice(sample(1:length(df[,1]), numberOfRows, replace = TRUE))
}

And then call it like so:

> data %>% pickRandomRows()
   letter number
1       W      5
2       Y      3
3       E      6
4       Q      8
5       M      9
6       H      9
7       E     10
8       T      2
9       I      5
10      V      4
 
> data %>% pickRandomRows(7)
  letter number
1      V      7
2      N      4
3      W      1
4      N      8
5      G      7
6      V      1
7      N      7
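
As an aside that is not in the original post: if your dplyr version is 0.2 or later, there is also a built-in helper, sample_n, which does the same sampling in one step without generating the row numbers yourself. A minimal sketch:

> data %>% sample_n(10)                 # 10 random rows, sampled without replacement
> data %>% sample_n(7, replace = TRUE)  # 7 rows, duplicates allowed

The hand-rolled function above is still useful for seeing how slice works, but sample_n keeps the intent a little more obvious.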
Categories: Programming

Finding Cause and Effect with Ishikawa Diagrams


Ishikawa diagrams, also known as fishbone diagrams, are a mechanism for generating and mining data to discover the cause and effect of an issue or observation. The technique was created by Kaoru Ishikawa (1968, University of Tokyo) to show the causes of a specific event. It has been adopted and used in many quality and analytical processes, such as SAFe's Inspect and Adapt process. Ishikawa diagrams can also be combined with other techniques such as the "five whys."

The tools:

  1. Whiteboard/flipchart and dry erase markers

The process for generating an Ishikawa diagram:

  1. Draw a line through the center of the whiteboard or flipchart.
  2. Identify the problem (effect) that will be investigated. Write the problem statement at the far right end of the centerline. A problem statement reflects the issue or idea that is being investigated.
  3. Decide on the major categories of the problem's causes. There are a number of standard categories that vary by industry or problem. For example, the categories often used for software projects are people, process, tools, project and environment. There are no hard and fast rules for the number and types of categories; let the problem scenario guide you.
  4. Draw a line for each of the categories radiating from the center line. Each line should be at approximately a 60 degree angle from the center line. Distribute the category lines around the center line to approximate the bones extending from a fish spine (hence the name - fishbone diagram).
  5. Brainstorm to identify potential causes of the problem.
  6. Write each cause on a line branching from the related category. Break each cause down into its underlying causes, adding additional branches as needed. Note: a cause can be listed in multiple categories.
  7. As the chart emerges, the facilitator should focus the team’s brainstorming efforts on areas of the chart that have the least detail.
  8. When the team loses focus, the exercise is complete. If the diagramming session runs longer than 30 minutes, consider taking a break to help keep the team fresh.

The process of generating an Ishikawa diagram focuses a team on defining the causes of a specific effect. The process is often used where identifying potential process improvements is the goal, such as in retrospectives. The formality of the Ishikawa diagramming process keeps teams focused on a specific goal.


Categories: Process Management

Sponsored Post: Apple, Asana, Hypertable, Sprout Social, Scalyr, FoundationDB, AiScaler, Aerospike, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • Apple has multiple openings. Changing the world is all in a day's work at Apple. Imagine what you could do here. 
    • Sr. Software Engineer-iOS Systems. Do you love building highly scalable, distributed web applications? Does the idea of performance tuning Java applications make your heart leap? Would you like to work in a fast-paced environment where your technical abilities will be challenged on a day to day basis? Do you want your work to make a difference in the lives of millions of people? Please apply here.
    • Apple Pay - Site Reliability Engineer. You already know this… every issue counts. A single ticket can be the key to discovering an issue impacting thousands of people. And now that you’ve found it, you can’t wait to fix it. You also know that owning the quality of an application is about separating the signal from the noise. Finding that signal is what motivates you. Now that you’ve found it, your next step is to roll up your sleeves and start coding. As a member of the Apple Pay SRE team, you’re expected to not just find the issues, but to write code and fix them. Please apply here.
    • Senior Software Engineer -iOS Systems. This role demands the best and brightest engineers. The ideal candidate will be well rounded and offer a diverse skill set that aligns with key qualifications. Practical experience integrating with a diverse set of third-party APIs will also serve to distinguish you from other candidates. This is a highly cross functional role, and the typical team member's day to day responsibilities on the Carrier Services team. Please apply here
  • Aerospike is hiring! Join the innovative team behind the world's fastest flash-optimized in-memory NoSQL database. Currently hiring for positions in our Mountain View, Calif., and Bangalore offices. Apply now! http://www.aerospike.com/careers

  • As a production-focused infrastructure engineer at Asana, you’ll be the person who takes the lead on setting and achieving our stability and uptime goals, architecting the production stack, defining the on-call experience, the build process, cluster management, monitoring and alerting. Please apply here.

  • Performance and Scale Engineer - Sprout Social. You will be like a physical trainer for the Sprout social media management platform: you will evaluate and make improvements to keep our large, diverse tech stack happy, healthy, and, most importantly, fast. You'll work up and down our back-end stack - from our RESTful API through to our myriad data systems and into the Java services and Hadoop clusters that feed them - searching for SPOFs, performance issues, and places where we can shore things up. Apply here.

  • UI Engineer - AppDynamics. AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data - AppDynamics. AppDynamics, a leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend component of the software that manages application architectures. Apply here.
Fun and Informative Events
  • Sign Up for New Aerospike Training Courses. Aerospike now offers two certified training courses, Aerospike for Developers and Aerospike for Administrators & Operators, to help you get the most out of your deployment. Find a training course near you. http://www.aerospike.com/aerospike-training/
Cool Products and Services
  • Hypertable Inc. Announces New UpTime Support Subscription Packages. The developer of Hypertable, an open-source, high-performance, massively scalable database, announces three new UpTime support subscription packages – Premium 24/7, Enterprise 24/7 and Basic. 24/7/365 support packages start at just $1995 per month for a ten node cluster -- $49.95 per machine, per month thereafter. For more information visit us on the Web at http://www.hypertable.com/. Connect with Hypertable: @hypertable--Blog.

  • FoundationDB launches SQL Layer. SQL Layer is an ANSI SQL engine that stores its data in the FoundationDB Key-Value Store, inheriting its exceptional properties like automatic fault tolerance and scalability. It is best suited for operational (OLTP) applications with high concurrency. Users of the Key-Value Store will have free access to SQL Layer. SQL Layer is also open source; you can get started with it on GitHub as well.

  • Diagnose server issues from a single tab. The Scalyr log management tool replaces all your monitoring and analysis services with one, so you can pinpoint and resolve issues without juggling multiple tools and tabs. It's a universal tool for visibility into your production systems. Log aggregation, server metrics, monitoring, alerting, dashboards, and more. Not just “hosted grep” or “hosted graphs,” but enterprise-grade functionality with sane pricing and insane performance. Trusted by in-the-know companies like Codecademy – try it free!

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Cloud deployable. Free instant trial, no sign-up required.  http://aiscaler.com/

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.


Categories: Architecture

Constructing a Credible Estimate

Herding Cats - Glen Alleman - Tue, 11/25/2014 - 17:56

To build a credible estimate for any project, in any domain, to produce a solution to any problem, we need to start with a few core ideas.

  • Gather historical data.
    • Unless you're inventing new physics, it is very unlikely what you want to do hasn't been done already, somewhere by someone.
    • We hear all the time this project is unique. Really?
    • This has NEVER been done before?
    • There is no reference design for what we want to do?
    • We are actually inventing the solution out of whole cloth?
  • Gather information about this specific project.
    • This doesn't mean full detailed requirements. That's just not going to happen on any real project.
    • Gather needed Capabilities. Follow the Capabilities Based Planning advice.
    • Sort these capabilities using whatever method you want, but sort them in some priority so the Analysis of Alternatives can be performed.
    • Capabilities are not requirements. Capabilities state what you'll be doing with the results of the project and how what you'll be doing will produce the planned value from the project.
  • Break out some statistical tools - Excel will work.
    • Does the historical data have enough statistical confidence to represent the actual past performance?
    • I see, all the time, 20 samples of stories with ±50% variances over the period of performance, where the Average is then used. Don't do this.
      • First, the Most Likely is the number you want. That's the Mode, the most frequently recurring value of all the numbers you have (a minimal sketch of computing it appears just after this list).
      • Next, read The Flaw of Averages on how you can be fooled by averages.
  • Finally, to produce a credible estimate, you'll need:
    • Experience
    • Skills
    • Knowledge
    • Data
    • Tools
    • People
    • Process
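
As a minimal sketch of the Mode-versus-Average point above - this example and its sample data are mine, not from the original post - any statistics tool will do; here it is in R (Excel's MODE and AVERAGE functions give the same answers):

# Invented historical data: 20 story durations in days, with wide variance
durations = c(3, 5, 5, 8, 5, 13, 5, 2, 5, 21, 5, 8, 5, 3, 13, 5, 2, 8, 5, 34)

mean(durations)   # the Average: 8 for this data, pulled upward by a few large values

# R has no built-in mode for data, but the most recurring value is easy to find
mostLikely = as.numeric(names(which.max(table(durations))))
mostLikely        # the Mode: 5 for this data, the "Most Likely" single number

Here the Average (8) overstates the Most Likely value (5) by 60 percent, which is exactly the trap the bullet above warns about.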


If you're missing any of the items in this list, it's going to be a disappointing effort. Some may even call it a waste to estimate. But not for the reasons you think. It is a waste to estimate if you don't know how to estimate. But estimates are not for you, unless you're the one providing the money. They're for those providing the money, who expect the outcomes from that expense to show up on some needed date, with the needed value that provides them with the ability to earn back the money.

Categories: Project Management

Getting Started on an Agile Project

Software Requirements Blog - Seilevel.com - Tue, 11/25/2014 - 17:00
I’m working with a new customer of ours, helping them with the requirements for an application that they will be building in-house. This customer has decided to give Scrum a try, so I’m also helping this project team make that transition as well. This customer had originally decided that they were going to buy a […]
Categories: Requirements

Local Firm Has Critical Message for Project Managers

Herding Cats - Glen Alleman - Tue, 11/25/2014 - 16:47

Rally Software is a local firm providing tools for the management of agile projects. Project Managers provide the glue for all human endeavors involving complex work processes. Rally has those tools, as do many others. Rally also has a message that needs to be addressed by the project management community. Organizing, planning, and executing social projects is one of the roles project managers can contribute to.

SIM posium 2014 - Denver from Ryan Martens

Related articles:
  • Estimating Guidance
  • When the Solution to Bad Management is a Bad Solution
  • Measures of Program Performance
  • Should I Be Estimating My Work?
  • Assessing Value Produced By Investments
Categories: Project Management

Software Development Conferences Forecast November 2014

From the Editor of Methods & Tools - Tue, 11/25/2014 - 15:20
Here is a list of software development related conferences and events on Agile (Scrum, Lean, Kanban), software testing and software quality, programming (Java, .NET, JavaScript, Ruby, Python, PHP) and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods & Tools software development magazine.

  • QCon London, March 2-6 2015, London, UK - Exclusive 50 pounds Methods & Tools discount with promo code "softdevconf50"
  • Mobile Dev + Test Conference, April 12-17 2015, San Diego, USA
  • ProgSCon London, April 17 2015, London, UK

The Call for Submissions ...

Visualizing My Objectives

NOOP.NL - Jurgen Appelo - Tue, 11/25/2014 - 14:00
OKRs

I love the concept of OKRs (Objectives & Key Results) and while experimenting with this idea at Happy Melly, I’m trying to figure out how to adapt the practice to fit my own context. One thing my team members and I have noticed over the last few months is that it’s hard to remember what we have committed to.

The post Visualizing My Objectives appeared first on NOOP.NL.

Categories: Project Management

Affinity Diagramming with Mute Mapping: The Duct Tape of Quality Tools

Affinity Diagramming With Multi-voting

Affinity diagramming is one of my favorite quality techniques. The technique is useful in any situation that requires generating ideas or getting a team talking. After generating ideas the technique provides a platform for organizing those ideas. I have used the technique for any scenario in which brainstorming would be appropriate, such as requirements definition, solution generation, process improvement and retrospectives. Affinity diagramming is generally a group technique with a facilitator acting as an anchor.

You need:

  1. Sticky notes (square, 3 inches by 3 inches)
  2. Flat surface (a wall works well)
  3. White board/Flip Chart and dry erase markers

The process:

  1. The first step in the affinity diagramming process is for the facilitator to generate a set of framing questions that will be used to elicit ideas, comments or statements about the area being studied.
  2. The second step is brainstorming. The facilitator will use the framing questions to get the team to generate ideas, comments or statements. As team members think of a comment based on the framing question, they write the idea on a sticky note, call out what was written and then hold it up for the facilitator to post on the wall or other flat surface. Lettering on the sticky note should be legible from across the room. The brainstorming process goes on until the team has exhausted the subject or the time box is used up. Small sticky notes are used to ensure that each note contains a single, short idea. Typically a team of 5 - 9 can generate 50 to 100 sticky notes during a 30 minute brainstorming session. The facilitator should ensure everyone participates, and should shift the framing questions as idea generation slows down.
  3. After the completion of the brainstorming phase, the team goes to the wall (or other surface) and re-arranges the sticky notes without conversation. The goal is to discover the relationships in the data. Time box the mute mapping exercise to about 1/3rd of the time of the brainstorming phase. A team is done re-grouping the data when everyone sits down, or when a single item keeps shifting between spots without other changes. The facilitator should ensure everyone participates without talking. Not all items will necessarily be grouped; occasionally one or a few ideas will be outliers.
  4. After completing the mute mapping, the facilitator should walk the team through the groups of ideas that have been created. The facilitator lists off the entries in each group and then asks the team to name the group. The goal of this phase is to validate that each idea belongs in its group. The team is free to move notes between groups if they fit better somewhere else.

Affinity diagramming with mute mapping is a tool to generate and sort a large number of ideas, comments or concepts into a more manageable order. Affinity diagramming is good for decision making, generating an initial product backlog, generating personas for user stories and retrospectives. Affinity diagramming with mute mapping might not be as versatile as duct tape, BUT it is close!


Categories: Process Management

Sky Force 2014 Reimagined for Android TV

Android Developers Blog - Mon, 11/24/2014 - 22:39
By Jamil Moledina, Games Strategic Partnerships Lead, Google Play

In the coming months, we’ll be seeing more media players, like the recently released Nexus Player, and TVs from partners with Android TV built-in hit the market. While there’s plenty of information available about the technical aspects of adapting your app or game to Android TV, it’s also useful to consider design changes to optimize for the living room. That way you can provide lasting engagement for existing fans as well as new players discovering your game in this new setting. Here are three things one developer did, and how you can do them too.

Infinite Dreams is an indie studio out of Poland, co-founded by hardcore game fans Tomasz Kostrzewski and Marek Wyszyński. With Sky Force 2014 TV, they brought their hit arcade style game to Android TV in a particularly clever way. The mobile-based version of Sky Force 2014 reimagined the 2004 classic by introducing stunning 3D visuals, and a free-to-download business model using in-app purchasing and competitive tournaments to increase engagement. In bringing Sky Force 2014 to TV, they found ways to factor in the play style, play sessions, and real-world social context of the living room, while paying homage to the title's classic arcade heritage. As Wyszyński puts it, "We decided not to take any shortcuts, we wanted to make the game feel like it was designed to be played on TV."

Orientation

For starters, Sky Force 2014 is played vertically on a smartphone or tablet, also known as portrait mode. In the game, you're piloting a powerful fighter plane flying up the screen over a scrolling landscape, targeting waves of steampunk enemies coming down at you. You can see far enough up the screen, enabling you to plan your attacks and dodge enemies in advance.

Vertical play on the mobile version

When bringing the game to TV, the quickest approach would have been to preserve that vertical orientation of the gameplay, by pillarboxing the field of play.

With Sky Force 2014, Infinite Dreams considered their options, and decided to scale the gameplay horizontally, in landscape mode, and recompose the view and combat elements. You’re still aiming up the screen, but the world below and the enemies coming at you are filling out a much wider field of view. They also completely reworked the UI to be comfortably operated with a gamepad or simple remote. From Wyszyński’s point of view, “We really didn't want to just add support for remote and gamepad on top of what we had because we felt it would not work very well.” This approach gives the play experience a much more immersive field of view, putting you right there in the middle of the action. More information on designing for landscape orientation can be found here.

Multiplayer

Like all mobile game developers building for the TV, Infinite Dreams had to figure out how to adapt touch input onto a controller. Sky Force 2014 TV accepts both remote control and gamepad controller input. Both are well-tuned, and fighter handling is natural and responsive, but Infinite Dreams didn't stop there. They took the opportunity to add cooperative multiplayer functionality to take advantage of the wider field of view from a TV. In this way, they not only scaled the visuals of the game to the living room, but also factored in that it's a living room where people play together. Given the extended lateral patterns of advancing enemies, multiplayer strategies emerge, like "divide and conquer," or "I got your back" for players of different skill levels. More information about adding controller support to your Android game can be found here, handling controller actions here, and mapping each player's paired controllers here.
Players battle side by side in the Android TV version
Business Model

Infinite Dreams is also experimenting with monetization and extending play session length. The TV version replaces several $1.99 in-app purchases and timers with a try-before-you-buy model which charges $4.99 after playing the first 2 levels for free. We've seen this single purchase model prove successful with other arcade action games like Mediocre's Smash Hit for smartphones and tablets, in which the purchase unlocks checkpoint saves. We're also seeing strong arcade action games like Vector Unit's Beach Buggy Racing and Ubisoft's Hungry Shark Evolution retain their existing in-app purchase models for Android TV. More information on setting up your games for these varied business models can be found here. We'll be tracking and sharing these variations in business models on Android TV, including variations in premium, as the Android TV platform grows.

Reflecting on the work involved in making these changes, Wyszyński says, “From a technical point of view the process was not really so difficult – it took us about a month of work to incorporate all of the features and we are very happy with the results.” Take a moment to check out Sky Force 2014 TV on a Nexus Player and the other games in the Android TV collection on Google Play, most of which made no design changes and still play well on a TV. Consider your own starting point, take a look at the Android TV section on our developer blog, and build the version of your game that would be most satisfying to players on the couch.

Categories: Programming

CITCON Europe 2014 wrap-up

Xebia Blog - Mon, 11/24/2014 - 19:33

On the 19th and 20th of September CITCON (pronounced "kit-con") took place in Zagreb, Croatia. CITCON is dedicated to continuous integration and testing. It brings together some of the most interesting people of the European testing and continuous integration community. These people also determine the topics of the conference.

They can do this because CITCON is an Open Space conference. If you're not familiar with the concept of Open Space, check out Wikipedia. On Friday evening, attendees can pitch their proposals. Through dot voting and (constant) shuffling of the schedule, the attendees create their conference program.

In this post we'll wrap up a few topics that were discussed.

Polytesting

Peter Zsoldos (@zsepi) went into his most recent brain-spin: polytesting. If I have a set of requirements, is it feasible to apply those requirements at different levels of my application; say, component, integration and UI level. This sounds very appealing because you can perform ATDD at different levels.
This approach is particularly interesting because it has the potential to keep you focused on the required functionality all the way. You'll need good, concrete requirements for this to work.

Microservices

Microservices are a hot topic nowadays. The promise of small, isolated units with clear interfaces is tempting. There are generally two types of architectures that can be applied. The most common one is where there is no central entity, and services communicate to each other directly.

Douglas Squirrel (@douglasquirrel) explained an alternative architecture that uses a central pub-sub "database" to which each service is allowed to publish "facts". Douglas deliberately used the term facts to describe single items that are considered true at a specific point in time ("events" is too generic a term).

The second model comes closer to mechanisms such as event sourcing (or even ESBs if you take it to the extreme). One of the advantages of this approach is that, because facts are stored, it's possible to construct new functionality based on existing facts. For example, you could use this functionality in a game to create leaderboards and, at a later stage, create leaderboards per continent, country, or whatever seems fit.

Unit testing

Arjan Molenaar introduced a flaming hot topic this year: "unit testing is a waste". Inspired by recent discussions of DHH, Martin Fowler, and Kent Beck, Arjan tried to find out the opinions of the CITCON crowd. Most of the people contributing to the discussion must have been working in consultancy, because the main conclusion was "It depends".

Whether unit testing is worth the effort mainly depends on the goals that people try to achieve when writing their unit tests. Some people write them from a TDD perspective. They use tests to guide themselves through development cycles, making sure they haven't made little errors. If this helps you, then please keep doing it! If it does not really help, well ...

Other people write unit tests from a regression perspective, or at least maintain them for regression testing. This part caused the most discussion. How useful are unit tests for regression testing purposes? Are you really catching regression if you isolate a single unit?

The growing interest in microservices also sheds new light on this discussion. In the future, when microservices will be widely adopted, we will be working with much smaller codebases. They might be so small and clear that unit tests are no longer required to guide us through the development process.

CI scaling

Another trending topic was scaling CI systems. It was good to see that the ideas we have at Xebia were consistent with the ideas we heard at CITCON. First off, the solution for everything (and world peace, it seems) is microservices. Unfortunately, some of us, for the time being, must deal with monolithic codebases. Luckily there are still options for your growing CI system, even though for now it remains one big chunk of code.

The staged pipeline: you test the things most likely to break first. Basically, you break your test suite up into multiple test suites and run them at separate stages in the pipeline.

But how do you determine what is most likely to break and what to test where? Tests that are most likely to break are those that are closely linked to the code changes, so run them first. Also, determine high-risk areas for your application. These areas can be identified based on trends (in failing tests) or simply through human analysis. To determine where to run the different test suites is mainly a matter of speed versus confidence. You want fast feedback so you don't want to push all your tests to the end of the pipeline. But you also don't want to wait forever before you know you can move on to the next thing.
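
Purely as an illustration - this sketch is mine, not something presented at CITCON - one simple way to find the "most likely to break" suites is to rank them by recent failure rate; in R, with invented data:

# Invented failure history: one row per test-suite run
runs = data.frame(
    suite  = c("api", "api", "ui", "ui", "ui", "db", "db", "api", "ui", "db"),
    failed = c(TRUE, FALSE, TRUE, TRUE, FALSE, FALSE, FALSE, TRUE, TRUE, FALSE)
)

# Failure rate per suite; the most fragile suites are candidates for the first, fast stage
failureRates = aggregate(failed ~ suite, data = runs, FUN = mean)
failureRates[order(-failureRates$failed), ]

Trend data like this only answers the "most likely to break" half of the question; where each suite runs in the pipeline is still the speed-versus-confidence trade-off described above.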

Beer brewing for process refinement

Who isn't interested in beer brewing? Tom Denley (@scarytom) proposed a session on home brewing and its analogy to process refinement. Because Arjan is a homebrewer himself, this seemed like an obvious session for him.

In addition to Tom explaining the process of brewing, we discussed how we got into brewing. In both cases, the first brew was made with a can of hopped malt syrup, adding yeast, and producing a beer from there. For your second beer, you replace the can of syrup with malt extract powder and dark malt (for flavour). At a later stage, you can replace the malt extract with ground malt.

What we basically do is start with the end in mind. If you're starting with continuous delivery, it is considered good practice to do the same: get your application deployed in production as soon as possible and optimise your process from your deployed app back to source code.

Again, it was a good conference with some nice take-aways. Next year's episode will most likely take place in Finland. The year after that... The Netherlands?

Testing cheatsheet

Xebia Blog - Mon, 11/24/2014 - 19:00

Sometimes it is not clear to everybody how unit tests relate to e2e tests. This cheatsheet I created describes, on one page:

  1. The different definitions
  2. Different structures of the tests
  3. The importance of unit tests
  4. The importance of e2e tests
  5. External versus internal quality
  6. E2E and unit tests living next to each other

Feel free to download and use it in your project if you feel there is a confusion of tongues between unit and e2e tests.

Download: TestingCheatSheet

 

A Flock of Tasty Sources on How to Start Learning High Scalability

This is a guest repost by Leandro Moreira.

distributed systems

When we are interested in scalability, we usually look for links, explanations, books, and references. This mini article links to the references I think might help you in this journey.

DISCLAIMER:

You don't need to have N machines to build and test a cluster or highly scalable system; these days you can use Vagrant to spin up N machines easily.

THE REFERENCES:

Now that you know you can empower yourself with virtual servers, I challenge you to not only read these links but put them into practice.

Good questions to test your knowledge:

Categories: Architecture

Mike Cohn's Agile Quotes

Herding Cats - Glen Alleman - Mon, 11/24/2014 - 17:30

Mike Cohn of Mountain Goat Software has a collection of 101 Agile Quotes.

There are a few I have heartburn with, but the vast majority are right on.

Some of my favorites:

  • Planning is everything, plans are nothing - Field Marshall Helmuth von Moltke. This is a much misused quote. In the military business, like all businesses that spend lots of money, have high risk, and high reward, we need a plan. That plan is a strategy for the success of the project, be it D-Day or an ERP deployment. That strategy is actually a hypothesis, and the hypothesis needs to have tests. That's what the plan describes: the tests that confirm the strategy is working. To conduct the tests, we need to perform work. When the test shows the strategy is not working, we need a new strategy. That is, we change the plan.
  • To be uncertain is to be uncomfortable, but to be certain is to be ridiculous - Chinese Proverb. Another misused quote. All project work is uncertain. Managing in the presence of uncertainty is part of good management. Uncertainty creates risk, and risk management is how adults manage projects - Tim Lister.
  • Scrum without automation is like driving a sports car on a dirt track - you won't experience the full potential, you will get frustrated, and you will probably end up blaming the car - Ilan Goldstein. Tools are the basis of all process and process improvement: paper on the wall, software management tools. Those who suggest that tools are ruining agile aren't working on complex projects.
  • If you define the problem successfully, you almost have the solution - Steve Jobs. This is the role of Plans, Integrated Master Planning in our domain, where the outcomes are described in units of Effectiveness and Performance in an increasing maturity cycle.
Categories: Project Management

I Spoke at Oredev This Year

Making the Complex Simple - John Sonmez - Mon, 11/24/2014 - 17:00

I’ve been waiting to put this post up until the videos from my talks at Oredev were put online, but since two of the three are online, and I wasn’t sure if the third was actually going to go up, I decided to go ahead and put up the post now. I don’t speak at […]

The post I Spoke at Oredev This Year appeared first on Simple Programmer.

Categories: Programming

Quote of the Month November 2014

From the Editor of Methods & Tools - Mon, 11/24/2014 - 14:57
Walking on water and developing software from a specification are easy if both are frozen. Source: Edward V. Berard (1993) Essays on object-oriented software engineering. Volume 1, Prentice Hall

Book Celebration (and Invitation)!

NOOP.NL - Jurgen Appelo - Mon, 11/24/2014 - 11:51
Management 3.0 #Workout

By now all the different editions of my new book Management 3.0 #Workout are finished and published. The book, easily the most colorful and practical management book in the world, is available as PDF, ePub, Kindle and in a printed edition. And, although writing a book is great fun, finishing a book feels even better! Especially since it’s worth a little celebration. If you’ve read any of my work, you know I love to celebrate. :o)

The post Book Celebration (and Invitation)! appeared first on NOOP.NL.

Categories: Project Management

f(by) Minsk 2014

Phil Trelford's Array - Mon, 11/24/2014 - 09:40

This weekend Evelina, Yan and I had the pleasure of speaking at f(by), the first dedicated functional conference in Belarus. It was a short hop by train from Vilnius to Minsk, where we had been attending Build Stuff. Sergey Tihon, of F# Weekly fame, was waiting for us at the train station to guide us to the hotel with a short tour of the city.

The venue was a large converted loft space, by the river and not far from the central station, with great views over the city. The event attracted over 100 developers from across the region, and we were treated to tea and tasty local cakes during the breaks.

FuncBy Group Photo

Evelina was the first of us to speak and got a great response to her talk on Understanding Social Networks with F#.

Evelina at f(by)

The slides and samples are available on Evelina’s github repository.

Next up Yan presented Learn you to tame complex APIs with F# powered DSLs:

Learn you to tame complex APIs with F#-powered DSLs from Yan Cui

My talk was another instalment of F# Eye for the C# Guy.

Unicorns

The talk introduces F# from the perspective of a C# developer using live samples covering syntax, F#/C# interop, unit testing, data access via F# Type Providers and F# to JS with FunScript.

In one example we looked at CO2 emissions using World Bank data (using FSharp.Data) in a line chart (using FSharp.Charting):

[gb;uk;by] => (fun i -> i.``CO2 emissions (kg per 2005 PPP $ of GDP)``)

 

CO2 emissions[3]

Many thanks to Alina for inviting us, and the Minsk F# community for making us feel very welcome.

Categories: Programming

SPaMCAST 317 – Questions, Answers and Controversy, Robust Software

http://www.spamcast.net

Listen to the Software Process and Measurement Podcast

SPaMCAST 317 tackles a wide range of frequently asked questions, ranging from the possibility of an acceleration trap, the relevance of function points, and whether teams have a peak load, to safe-to-fail experiments. Questions, answers and controversy!

We will also have the next installment of Kim Pries’s column, The Software Sensei! This week Kim discusses robust software.

The essay starts with “Agile Can Contribute to an Acceleration Trap”

I am often asked whether Agile techniques contribute to an acceleration trap in IT. In an article in The Harvard Business Review, Bruch and Menges (April 2010) define an acceleration trap as the malaise that sets in as an organization falls prey to chronic overloading. It can be interpreted as laziness or recalcitrance, which then elicits even more pressure to perform, generating an even deeper malaise. The results of the pressure/malaise cycle are generally a poor working atmosphere and employee loss. Agile can contribute to an acceleration trap, but only as a reflection of poor practices. Agile is often perceived to induce an acceleration trap in two ways: organizational change and delivery cadence.

Listen to the rest now

Call to action!

We are in the middle of a re-read of John Kotter's classic Leading Change on the Software Process and Measurement Blog. Are you participating in the re-read? Please feel free to jump in and add your thoughts and comments!

After we finish the current re-read, we will need to decide which book will be next. We are building a list of the books that have had the most influence on readers of the blog and listeners to the podcast. Can you answer the question?

What are the two books that have most influenced your career (business, technical or philosophical)? Send the titles to spamcastinfo@gmail.com.

First, we will compile a list and publish it on the blog. Second, we will use the list to drive future "Re-read" Saturdays. Re-read Saturday is an exciting new feature that began on the Software Process and Measurement blog on November 8th. Feel free to choose your platform; send an email, leave a message on the blog or Facebook, or just tweet the list (use hashtag #SPaMCAST)!

Next

SPaMCAST 318 features our interview with Rob Cross. Rob and I discussed his InfoQ article "How to Incorporate Data Analytics into Your Software Process." Rob provides ideas on how the theory of big data can be incorporated into big action.

 

Upcoming Events

DCG Webinars:

Agile Risk Management – It Is Still Important
Date: December 18th, 2014
Time: 11:30am EST

Register Now

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI's mission is to pull together the expertise and educational efforts of the world's leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast gets some support if you sign up here. All the revenue our sponsorship generates goes for bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management