Software Development Blogs: Programming, Software Testing, Agile, Project Management

Feed aggregator

Software Development Conferences Forecast August 2016

Here is a list of software development related conferences and events on Agile project management (Scrum, Lean, Kanban), software testing and software quality, software architecture, programming (Java, .NET, JavaScript, Ruby, Python, PHP), DevOps and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods […]

Five Different Management Styles

Attention dogs: delegative management?

During a keynote speech at a conference I recently attended, I listened to Phillip Lew challenge the orthodoxy of Agile’s over-reliance on group decision making (participative management) styles. Agile teams are typically built on the presumption that group decision making is all that is needed to deliver value to the customer. Group decision making is often contrasted with classic command and control (autocratic) forms of management. In many cases, autocratic management is portrayed as the antithesis of Agile, which casts the discussion in terms of good and evil. Casting the discussion of management styles in terms of good and evil is probably an overstatement, and believing that there are only two management styles is a misstatement. Let’s deal with the misstatement first. There are at least five common management styles. Before we can wrestle with whether Agile only works under a purely participative management style, we need to agree on a few definitions.

Autocratic management, also known as directive or command and control, is based on the leader assigning work to subordinates and then overseeing the completion of the work. Taylorism, discussed in the re-read of XP Explained, is a form of autocratic management.

Delegative management is a relatively hands-off approach in which a manager hands off a piece of work to a team or a lower-level manager and lets them deal with it. The hand-off of work includes the responsibility for accomplishing the work. This form of management can be thought of as fire-and-forget: assignments, once made, are left to be performed with a minimum of supervision.

Negotiative management features interactions to reach a common agreement. Negotiative management can be practiced between individuals or groups.  Decisions are often based on the power differentials between the participants and their negotiating abilities.

Participative management encourages all people involved in the work to contribute to organizing the work and developing solutions. This management style works best when it leverages the diversity of knowledge and experience of the participants. Decisions under this style of management are typically reached by group consensus. This is the most prominent management style seen at the team level in Agile, which highlights self-organizing and self-managing groups.

Consultative management is a hybrid form that combines participative and autocratic management. The leader solicits input from participants, but ultimately makes the final decision. When I was in the corporate world this was a typical management approach because it preserves a clear chain of responsibility.

None of these management styles is prima facie good or bad; they are contextually useful. For example, in the middle of a traffic accident, a driver will not have time to invoke participative management, but rather will need to make and implement decisions autocratically. Each management style has strengths and weaknesses and can be applied effectively in specific contexts.

 

Next:

  • Leadership versus Management
  • Management Styles in Agile Teams
  • Management Styles in Scaled Agile
  • Servant Leaders, Revisited

Categories: Process Management

What Are Story Points?

Mike Cohn's Blog - Tue, 08/23/2016 - 15:00

Story points are a unit of measure for expressing an estimate of the overall effort that will be required to fully implement a product backlog item or any other piece of work.

When we estimate with story points, we assign a point value to each item. The raw values we assign are unimportant. What matters are the relative values. A story that is assigned a 2 should take twice as much effort as a story that is assigned a 1. It should also take two-thirds of the effort of a story that is estimated as 3 story points.

Instead of assigning 1, 2 and 3, a team could just as well have assigned 100, 200 and 300, or 1 million, 2 million and 3 million. It is the ratios that matter, not the actual numbers.
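
To see why only the ratios matter, consider that any forecast built from story points is unchanged when every estimate is scaled by the same constant. Here is a minimal sketch in Python; the backlog, point values, and velocity are invented for illustration and are not taken from this article.

backlog_points = [1, 2, 3, 2, 1]   # hypothetical estimates in "small" units
velocity = 3                       # hypothetical points completed per sprint

scaled_backlog = [p * 100 for p in backlog_points]   # same ratios, bigger numbers
scaled_velocity = velocity * 100

# The forecast depends only on the ratios, so both versions agree.
print(sum(backlog_points) / float(velocity))         # 3.0 sprints
print(sum(scaled_backlog) / float(scaled_velocity))  # 3.0 sprints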

What Goes Into a Story Point?

Because story points represent the effort to develop a story, a team’s estimate must include everything that can affect the effort. That could include:

  • The amount of work to do
  • The complexity of the work
  • Any risk or uncertainty in doing the work

When estimating with story points, be sure to consider each of these factors. Let’s see how each impacts the effort estimate given by story points.

The Amount of Work to Do

Certainly, if there is more of something to do, the estimate of effort should be larger. Consider the case of developing two web pages. The first page has only one field and a label asking the user to enter a name. The second page has 100 fields that are also simply filled with a bit of text.

The second page is no more complex. There are no interactions among the fields and each is nothing more than a bit of text. There’s no additional risk on the second page. The only difference between these two pages is that there is more to do on the second page.

The second page should be given more story points. It probably doesn’t get 100 times more points even though there are 100 times as many fields. There are, after all, economies of scale and maybe making the second page is only 2 or 3 or 10 times as much effort as the first page.

Risk and Uncertainty

The amount of risk and uncertainty in a product backlog item should affect the story point estimate given to the item.

If a team is asked to estimate a product backlog item and the stakeholder asking for it is unclear about what will be needed, that uncertainty should be reflected in the estimate.

If implementing a feature involves changing a particular piece of old, brittle code that has no automated tests in place, that risk should be reflected in the estimate.

Complexity

Complexity should also be considered when providing a story point estimate. Think back to the earlier example of developing a web page with 100 trivial text fields with no interactions between them.

Now think about another web page also with 100 fields. But some are date fields with calendar widgets that pop up. Some are formatted text fields like phone numbers or Social Security numbers. Other fields do checksum validations as with credit card numbers.

This screen also requires interactions between fields. If the user enters a Visa card, a three-digit CVV field is shown. But if the user enters an American Express card, a four-digit CVV field is shown.

Even though there are still 100 fields on this screen, these fields are harder to implement. They’re more complex. They’ll take more time. There’s more chance the developer makes a mistake and has to back up and correct it.

This additional complexity should be reflected in the estimate provided.

Consider All Factors: Amount of Work, Risk and Uncertainty, and Complexity

It may seem impossible to combine three factors into one number and provide that as an estimate. It’s possible, though, because effort is the unifying factor. Estimators consider how much effort will be required to do the amount of work described by a product backlog item.

Estimators then consider how much effort to include for dealing with the risk and uncertainty inherent in the product backlog item. Usually this is done by considering the risk of a problem occurring and the impact if the risk does occur. So, for example, more will be included in the estimate for a time-consuming risk that is likely to occur than for a minor and unlikely risk.
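
One way to picture that weighting is to add an allowance of probability times impact for each risk to the base effort. The following sketch and all of its numbers are invented for illustration; they are not from this article.

base_effort = 5.0  # hypothetical effort for the work itself, in points

risks = [
    # name, probability of occurring, impact if it occurs (in points)
    ("brittle legacy module breaks", 0.6, 3.0),
    ("stakeholder changes the requirement", 0.1, 1.0),
]

# Expected allowance: the likely, time-consuming risk dominates the total.
risk_allowance = sum(probability * impact for _, probability, impact in risks)
estimate = base_effort + risk_allowance

print(risk_allowance)  # about 1.9
print(estimate)        # about 6.9, which a team might round up to an 8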

Estimators also consider the complexity of the work to be done. Work that is complex will require more thinking, may require more trial-and-error experimentation, perhaps more back-and-forth with a customer, may take longer to validate and may need more time to correct mistakes.

All three factors must be combined.

Consider Everything in the Definition of Done

A story point estimate must include everything involved in getting a product backlog item all the way to done. If a team’s definition of done includes creating automated tests to validate the story (and that would be a good idea), the effort to create those tests should be included in the story point estimate.

Story points can be a hard concept to grasp. But the effort to fully understand that points represent effort as impacted by the amount of work, the complexity of the work and any risk or uncertainty in the work will be worth it.

Connecting Project Benefits to Business Strategy for Success

Herding Cats - Glen Alleman - Tue, 08/23/2016 - 05:01

The current PMI Pulse, titled Delivering Value: Focus on Benefits during Project Execution, provides some guidance on how to manage the benefits side of an IT project. But the article misses the mark on an important concept. Below is a chart from the paper suggesting metrics for the benefits.

But where do these metrics come from?

Chart from the PMI Pulse report: suggested metrics for the benefits

The question is where do the measures of the benefits listed in the above chart come from? 

The answer is that they come from the strategy of the IT function. Where is the strategy defined? The answer is in the Balanced Scorecard. This is how ALL connections are made in enterprise IT projects. Why are we doing something? How will we recognize that it's the right thing to do? What are the measures of the outcomes, and how are they connected to each other and to the top-level strategic needs of the firm?

When you hear “we can't forecast the future benefits of our work,” you can count on the firm spending a pile of money for probably not much value. Follow the steps starting on page 47 in the presentation above, build the 4 perspectives, and connect the initiatives:
  • Stakeholder - what does the business need in terms of beneficial outcomes?
  • Internal Processes - what governance processes will be used to produce these outcomes?
  • Learning and Growth - what people, information, and organizational elements will be needed to execute the processes that produce the beneficial outcomes?
  • Budget - what are you willing to spend to achieve these beneficial outcomes?

As always, each of these is a random variable operating in the presence of uncertainty, creating risk that they will not be achieved. As always, this means making estimates of both the beneficial outcomes and the cost to achieve them.

As with all non-trivial projects, estimating is a critical success factor. Uncertainty is unavoidable. Making decisions in the presence of uncertainty is unavoidable. Having some hope that a decision will result in a beneficial outcome requires making estimates of that outcome and choosing the most likely beneficial one.

Anyone telling you otherwise is working on a de minimis project.

So Let's Apply These Principles to a Recent Post

A post, Uncertainty of benefits versus costs, has some ideas that need addressing ...

  • Return on an investment is the benefits minus the costs. 
    • And both are random variables subject to reducible and irreducible uncertainties.
    • Start by building a model of these uncertainties (a minimal sketch of such a model appears near the end of this post).
    • Apply that model and update it with data from the project as it proceeds.
  • Most people focus way too much on costs and not enough on benefits.
    • Why? This is bad management. This is naive management. Stop doing stupid things on purpose.
    • Risk Management (from the underlying uncertainties) is how adults manage projects - Tim Lister.
    • Behave like an adult, manage the risk.
  • If you are working on anything innovative, your benefit uncertainty is crazy high.
    • Says who?
    • If you don't have some statistically confident sense of what the payoff is going to be, you'd better be ready to spend money to find out before you spend all the money.
    • This is naive project management and naive business management.
    • It's counter to the first bullet - ROI = (Value - Cost)/Cost. 
    • Having an acceptable level of confidence in both Value and Cost is part of Adult Management of other people's money.
  • But we can’t estimate based on data, it has to be guesses!
    • No, estimates are not guesses unless they are made by a child. 
    • Estimates ARE based on data. This is called Reference Class Forecasting. Also parametric models use past performance to project future performance.
    • If your cost estimate might be off by +/-50% and your benefit estimate could be off by +/-95% (or more), you're pretty much clueless about what the customer wants. Or you're spending money on an R&D project to find out. This is one of those examples conjectured by inexperienced estimators. This is not how it works in any mature firm.
    • Adults don't guess, they estimate.
    • Adults know how to estimate. Lots of books, papers, and tools.
  • So we should all stop doing estimates, right?
    • No - an estimate is a forecast and a commitment.
    • The commitment MUST have a confidence level.
    • We have 80% confidence of launching on or before the 3rd week in November 2014 with 4 astronauts in our vehicle to the International Space Station. This was a VERY innovative system. This is why a contract for $3.5B was awarded. This approach is applicable to ALL projects.
    • Only de minimis projects have no deadline or Not to Exceed target cost.

All projects are probabilistic. All projects have uncertainty in cost and benefits. Estimating both cost and benefit, continuously updating those estimates, and taking action to correct unfavorable variances from plan, is how adults manage projects.
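
To make the "cost and benefit are both random variables" point concrete, here is a minimal Monte Carlo sketch in Python. The triangular distributions and every parameter in them are assumptions invented for illustration; they are not figures from this post or from the PMI report.

import random

random.seed(42)
TRIALS = 10000

def simulated_roi():
    # low, high, most likely -- hypothetical figures in $K
    cost = random.triangular(80, 160, 100)
    # benefits are modeled with a much wider spread than costs
    benefit = random.triangular(60, 400, 180)
    return (benefit - cost) / cost

samples = sorted(simulated_roi() for _ in range(TRIALS))

probability_of_loss = sum(1 for r in samples if r < 0) / float(TRIALS)
print("P(ROI < 0): %.2f" % probability_of_loss)
print("Median ROI: %.2f" % samples[TRIALS // 2])
print("We are 80%% confident ROI exceeds: %.2f" % samples[int(TRIALS * 0.2)])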

  Related articles:
  • Strategy is Not the Same as Operational Effectiveness
  • Decision Making Without Estimates?
  • Local Firm Has Critical Message for Project Managers
  • Architecture-Center ERP Systems in the Manufacturing Domain
  • The Purpose Of Guiding Principles in Project Management
  • Herding Cats: Large Programs Require Special Processes
  • Herding Cats: The Problems with Schedules
  • Quote of the Day
  • The Cost Estimating Problem
  • Estimating Processes in Support of Economic Analysis
Categories: Project Management

Taking the final wrapper off of Android 7.0 Nougat

Android Developers Blog - Tue, 08/23/2016 - 00:24

Posted by Dave Burke, VP of Engineering

Android 7.0 Nougat

Today, Android 7.0 Nougat will begin rolling out to users, starting with Nexus devices. At the same time, we’re pushing the Android 7.0 source code to the Android Open Source Project (AOSP), extending public availability of this new version of Android to the broader ecosystem.

We’ve been working together with you over the past several months to get your feedback on this release, and also to make sure your apps are ready for the users who will run them on Nougat devices.

What’s inside Nougat

Android Nougat reflects input from thousands of fans and developers like you, all around the world. There are over 250 major features in Android Nougat, including VR Mode in Android. We’ve worked at all levels of the Android stack in Nougat — from how the operating system reads sensor data to how it sends pixels to the display — to make it especially well suited to providing high-quality mobile VR experiences.

Plus, Nougat brings a number of new features to help make Android more powerful, more productive and more secure. It introduces a brand new JIT/AOT compiler to improve software performance, make app installs faster, and take up less storage. It also adds platform support for Vulkan, a low-overhead, cross-platform API for high-performance 3D graphics. Multi-Window support lets users run two apps at the same time, and Direct Reply lets users reply directly to notifications without having to open the app. As always, Android is built with powerful layers of security and encryption to keep your private data private, and Nougat adds new features like file-based encryption, seamless updates, and Direct Boot.

You can find all of the Nougat developer resources here, including details on behavior changes and new features you can use in your apps. An overview of what's new for developers is available here, and you can explore all of the new user features in Nougat here.

Multi-window mode in Android Nougat

The next wave of users

Starting today and rolling out over the next several weeks, the Nexus 6, Nexus 5X, Nexus 6P, Nexus 9, Nexus Player, Pixel C, and General Mobile 4G (Android One) will get an over-the-air software update to Android 7.0 Nougat. Devices enrolled in the Android Beta Program will also receive this final version.

And there are many tasty devices coming from our partners running Android Nougat, including the upcoming LG V20, which will be the first new smartphone that ships with Android Nougat, right out of the box.

With all of these new devices beginning to run Nougat, now is the time to publish your app updates to Google Play. We recommend compiling against, and ideally targeting, API 24. If you’re still testing some last-minute changes, a great strategy is to use Google Play’s beta testing feature to get early feedback from a small group of users — including those on Android 7.0 Nougat — and then do a staged rollout as you release the updated app to all users.

What’s next for Nougat?

We’re moving Nougat into a new regular maintenance schedule over the coming quarters. In fact, we’ve already started work on the first Nougat maintenance release, which will bring continued refinements and polish, and we’re planning to bring that to you this fall as a developer preview. Stay tuned!

We’ll be closing open bugs logged against Developer Preview builds soon, but please keep the feedback coming! If you still see an issue that you filed in the preview tracker, just file a new issue against Android 7.0 in the AOSP issue tracker.

Thanks for being part of the preview, which we shared earlier this year with an eye towards giving everyone the opportunity to make the next release of Android stronger. Your continued feedback has been extremely beneficial in shaping this final release, not just for users, but for the entire Android ecosystem.

Categories: Programming

Modernizing OAuth interactions in Native Apps for Better Usability and Security

Google Code Blog - Mon, 08/22/2016 - 22:29

Posted by William Denniss, Product Manager, Identity and Authentication

The Identity team is constantly striving to help Google users sign-in to third-party applications with their Google account in a secure and seamless way, and enable users to share select information from their account such as their calendar or contact information with other apps, when they wish to do so.

Under the hood these interactions happen via OAuth requests, and over the years Google has supported a number of ways for developers to implement OAuth flows with us. With improved security and usability in mind, we will soon be ending the support for one of these ways. In the coming months, we will no longer allow OAuth requests to Google in embedded browsers known as “web-views”, such as the WebView UI element on Android and UIWebView/WKWebView on iOS, and equivalents on Windows and OS X.
Using the device browser for OAuth requests instead of an embedded web-view can improve the usability of your apps significantly: users only need to sign in to Google once per device, improving the conversion rates of sign-in and authorization flows in your app. Modern “in-app browser tab” patterns available on some operating systems, such as Chrome Custom Tabs on Android and SFSafariViewController on iOS, offer further UX improvements for browser-based OAuth flows.

In contrast, the outdated method of using embedded browsers for OAuth means a user must sign in to Google each time, instead of using the existing logged-in session from the device. The device browser also provides improved security, since apps are able to inspect and modify content in a web-view but not content shown in the browser.

To help you migrate, we offer libraries and samples that follow modern best practices which you can use:

  • Google Sign-In for Android and iOS, our recommended SDK for sign-in and OAuth with Google Accounts.
  • AppAuth for Android, iOS, and OS X, an open source OAuth client library that can be used with Google and other OAuth providers. We also offer GTMAppAuth (for iOS and OS X), a library which enables AppAuth support for the Google APIs Client Library for Objective-C and the GTM Session Fetcher projects.
  • Google Sign-in and OAuth Examples for Windows, examples demonstrating how to use the browser to authenticate Google users in various Windows environments such as Universal Windows Platform (UWP), console and desktop apps.

You can also read protocol-level documentation for our standards-based support of OAuth for Native Apps, and an IETF best current practice draft on this topic.

Versions of Google Sign-In on iOS prior to version 3.0 don’t support the current industry best practices of the in-app browser tab, and therefore are also deprecated. If you use Google Sign-In, please update to the latest version to get all the recent security and usability improvements. For now, this policy does not remove our support of WebView on iOS 8, however we may start to display notices encouraging users to upgrade their device for better security.

The rollout schedule for the deprecation of web-views for OAuth requests to Google is as follows. Starting October 20, 2016, we will prevent new OAuth clients from using web-views on platforms with a viable alternative, and will phase in user-facing notices for existing OAuth clients. On April 20, 2017, we will start blocking OAuth requests using web-views for all OAuth clients on platforms where viable alternatives exist.

If you have any questions with the migration, please post to Stack Overflow tagged with “google-oauth”.

Categories: Programming

Neo4j/scikit-learn: Calculating the cosine similarity of Game of Thrones episodes

Mark Needham - Mon, 08/22/2016 - 22:12

A couple of months ago Praveena and I created a Game of Thrones dataset to use in a workshop and I thought it’d be fun to run it through some machine learning algorithms and hopefully find some interesting insights.

The dataset is available as CSV files but for this analysis I’m assuming that it’s already been imported into neo4j. If you want to import the data you can run the tutorial by typing the following into the query bar of the neo4j browser:

:play http://guides.neo4j.com/got

Since we don’t have any training data we’ll be using unsupervised learning methods, and we’ll start simple by calculating the similarity of episodes based on character appearances. We’ll be using scikit-learn’s cosine similarity function to determine episode similarity.

Christian Perone has an excellent blog post explaining how to use cosine similarity on text documents which is well worth a read. We’ll be using a similar approach here, but instead of building a TF/IDF vector for each document we’re going to create a vector indicating whether a character appeared in an episode or not.

e.g. imagine that we have 3 characters – A, B, and C – and 2 episodes. A and B appear in the first episode and B and C appear in the second episode. We would represent that with the following vectors:

Episode 1 = [1, 1, 0]
Episode 2 = [0, 1, 1]

We could then calculate the cosine similarity between these two episodes like this:

>>> from sklearn.metrics.pairwise import cosine_similarity
>>> one = [1,1,0]
>>> two = [0,1,1]
 
>>> cosine_similarity([one, two])
array([[ 1. ,  0.5],
       [ 0.5,  1. ]])

So this is telling us that Episode 1 is 100% similar to Episode 1, Episode 2 is 100% similar to itself as well, and Episodes 1 and 2 are 50% similar to each other based on the fact that they both have an appearance of Character B.
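
That 0.5 is just the cosine similarity formula at work: the dot product of the two vectors divided by the product of their lengths. We can check it by hand with numpy, which scikit-learn already depends on:

import numpy as np

one = np.array([1, 1, 0])
two = np.array([0, 1, 1])

# cosine similarity = (A . B) / (|A| * |B|)
manual = np.dot(one, two) / (np.linalg.norm(one) * np.linalg.norm(two))
print(round(manual, 6))  # 0.5 -- one shared character out of two per episode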

Note that the character names aren’t mentioned at all; each character is implicitly a position in the array. This means that when we use our real dataset we need to ensure that the characters are in the same order for each episode, otherwise the calculation will be meaningless!

In neo4j land we have an APPEARED_IN relationship between a character and each episode that they appeared in. We can therefore write the following code using the Python driver to get all pairs of episodes and characters:

from neo4j.v1 import GraphDatabase, basic_auth
driver = GraphDatabase.driver("bolt://localhost", auth=basic_auth("neo4j", "neo"))
session = driver.session()
 
rows = session.run("""
    MATCH (c:Character), (e:Episode)
    OPTIONAL MATCH (c)-[appearance:APPEARED_IN]->(e)
    RETURN e, c, appearance
    ORDER BY e.id, c.id""")

We can iterate through the rows to see what the output looks like:

>>> for row in rows:
        print row
 
<Record e=<Node id=6780 labels=set([u'Episode']) properties={u'season': 1, u'number': 1, u'id': 1, u'title': u'Winter Is Coming'}> c=<Node id=5415 labels=set([u'Character']) properties={u'name': u'Addam Marbrand', u'id': u'/wiki/Addam_Marbrand'}> appearance=None>
<Record e=<Node id=6780 labels=set([u'Episode']) properties={u'season': 1, u'number': 1, u'id': 1, u'title': u'Winter Is Coming'}> c=<Node id=5882 labels=set([u'Character']) properties={u'name': u'Adrack Humble', u'id': u'/wiki/Adrack_Humble'}> appearance=None>
<Record e=<Node id=6780 labels=set([u'Episode']) properties={u'season': 1, u'number': 1, u'id': 1, u'title': u'Winter Is Coming'}> c=<Node id=6747 labels=set([u'Character']) properties={u'name': u'Aegon V Targaryen', u'id': u'/wiki/Aegon_V_Targaryen'}> appearance=None>
<Record e=<Node id=6780 labels=set([u'Episode']) properties={u'season': 1, u'number': 1, u'id': 1, u'title': u'Winter Is Coming'}> c=<Node id=5750 labels=set([u'Character']) properties={u'name': u'Aemon', u'id': u'/wiki/Aemon'}> appearance=None>
<Record e=<Node id=6780 labels=set([u'Episode']) properties={u'season': 1, u'number': 1, u'id': 1, u'title': u'Winter Is Coming'}> c=<Node id=5928 labels=set([u'Character']) properties={u'name': u'Aeron Greyjoy', u'id': u'/wiki/Aeron_Greyjoy'}> appearance=None>
<Record e=<Node id=6780 labels=set([u'Episode']) properties={u'season': 1, u'number': 1, u'id': 1, u'title': u'Winter Is Coming'}> c=<Node id=5503 labels=set([u'Character']) properties={u'name': u'Aerys II Targaryen', u'id': u'/wiki/Aerys_II_Targaryen'}> appearance=None>
<Record e=<Node id=6780 labels=set([u'Episode']) properties={u'season': 1, u'number': 1, u'id': 1, u'title': u'Winter Is Coming'}> c=<Node id=6753 labels=set([u'Character']) properties={u'name': u'Alannys Greyjoy', u'id': u'/wiki/Alannys_Greyjoy'}> appearance=None>
<Record e=<Node id=6780 labels=set([u'Episode']) properties={u'season': 1, u'number': 1, u'id': 1, u'title': u'Winter Is Coming'}> c=<Node id=6750 labels=set([u'Character']) properties={u'name': u'Alerie Tyrell', u'id': u'/wiki/Alerie_Tyrell'}> appearance=None>
<Record e=<Node id=6780 labels=set([u'Episode']) properties={u'season': 1, u'number': 1, u'id': 1, u'title': u'Winter Is Coming'}> c=<Node id=5753 labels=set([u'Character']) properties={u'name': u'Alliser Thorne', u'id': u'/wiki/Alliser_Thorne'}> appearance=None>
<Record e=<Node id=6780 labels=set([u'Episode']) properties={u'season': 1, u'number': 1, u'id': 1, u'title': u'Winter Is Coming'}> c=<Node id=5858 labels=set([u'Character']) properties={u'name': u'Alton Lannister', u'id': u'/wiki/Alton_Lannister'}> appearance=None>

Next we’ll build a ‘matrix’ of episodes/characters. If a character appears in an episode then we’ll put a ‘1’ in the matrix, if not we’ll put a ‘0’:

episodes = {}
for row in rows:
    if episodes.get(row["e"]["id"]) is None:
        if row["appearance"] is None:
            episodes[row["e"]["id"]] = [0]
        else:
            episodes[row["e"]["id"]] = [1]
    else:
        if row["appearance"] is None:
            episodes[row["e"]["id"]].append(0)
        else:
            episodes[row["e"]["id"]].append(1)

Here’s an example of one entry in the matrix:

>>> len(episodes)
60
 
>>> len(episodes[1])
638
 
>>> episodes[1]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

From this output we learn that there are 60 episodes and 638 characters in Game of Thrones so far. We can also see which characters appeared in the first episode, although it’s a bit tricky to work out which index in the array corresponds to each character.
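
One way to make those indices easier to interpret (this is my addition, not part of the original post) is to fetch the character names in the same ORDER BY c.id order used above, so that position i in every episode vector corresponds to characters[i]:

# Reuses the open session from earlier; the ORDER BY matches the episode query.
characters = [row["name"] for row in session.run(
    "MATCH (c:Character) RETURN c.name AS name ORDER BY c.id")]

# Characters appearing in the first episode, read straight off its vector:
appeared_in_first = [name for name, flag in zip(characters, episodes[1]) if flag == 1]
print(appeared_in_first)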

The next thing we’re going to do is calculate the cosine similarity between episodes. Let’s start by seeing how similar the first episode is to all the others:

>>> all = episodes.values()
 
>>> cosine_similarity(all[0:1], all)[0]
array([ 1.        ,  0.69637306,  0.48196269,  0.54671752,  0.48196269,
        0.44733753,  0.31707317,  0.42340087,  0.34989921,  0.43314808,
        0.36597766,  0.18421252,  0.30961158,  0.2328101 ,  0.30616181,
        0.41905818,  0.36842504,  0.35338088,  0.18376917,  0.3569686 ,
        0.2328101 ,  0.34539847,  0.25043516,  0.31707317,  0.25329221,
        0.33342786,  0.34921515,  0.2174909 ,  0.2533473 ,  0.28429311,
        0.23026565,  0.22310537,  0.22365301,  0.23816275,  0.28242289,
        0.16070148,  0.24847093,  0.21434648,  0.03582872,  0.21189672,
        0.15460414,  0.17161693,  0.15460414,  0.17494961,  0.1234662 ,
        0.21426863,  0.21434648,  0.18748505,  0.15308091,  0.20161946,
        0.19877675,  0.30920827,  0.21058466,  0.19127301,  0.24607943,
        0.18033393,  0.17734311,  0.16296707,  0.18740851,  0.23995201])

The first entry in the array indicates that episode 1 is 100% similar to episode 1 which is a good start. It’s 69% similar to episode 2 and 48% similar to episode 3. We can sort that array to work out which episodes it’s most similar to:

>>> for idx, score in sorted(enumerate(cosine_similarity(all[0:1], all)[0]), key = lambda x: x[1], reverse = True)[:5]:
        print idx, score
 
0 1.0
1 0.696373059207
3 0.546717521051
2 0.481962692712
4 0.481962692712

Or we can see how similar the last episode of season 6 is compared to the others:

>>> for idx, score in sorted(enumerate(cosine_similarity(all[59:60], all)[0]), key = lambda x: x[1], reverse = True)[:5]:
        print idx, score
 
59 1.0
52 0.500670191678
46 0.449085146211
43 0.448218732478
49 0.446296233312

I found it a bit painful exploring similarities like this so I decided to write them into neo4j instead and then write a query to find the most similar episodes. The following query creates a SIMILAR_TO relationship between episodes and sets a score property on that relationship:

>>> episode_mapping = {}
>>> for idx, episode_id in enumerate(episodes):
        episode_mapping[idx] = episode_id
 
>>> for idx, episode_id in enumerate(episodes):
        similarity_matrix = cosine_similarity(all[idx:idx+1], all)[0]
        for other_idx, similarity_score in enumerate(similarity_matrix):
            other_episode_id = episode_mapping[other_idx]
            print episode_id, other_episode_id, similarity_score
            if episode_id != other_episode_id:
                session.run("""
                    MATCH (episode1:Episode {id: {episode1}}), (episode2:Episode {id: {episode2}})
                    MERGE (episode1)-[similarity:SIMILAR_TO]-(episode2)
                    ON CREATE SET similarity.score = {similarityScore}
                    """, {'episode1': episode_id, 'episode2': other_episode_id, 'similarityScore': similarity_score})
 
    session.close()

The episode_mapping dictionary is needed to map from indices back to episode ids, e.g. index 0 corresponds to episode 1.

If we want to find the most similar pair of episodes in Game of Thrones we can execute the following query:

MATCH (episode1:Episode)-[similarity:SIMILAR_TO]-(episode2:Episode)
WHERE ID(episode1) > ID(episode2)
RETURN "S" + episode1.season + "E" + episode1.number AS ep1, 
       "S" + episode2.season + "E" + episode2.number AS ep2, 
       similarity.score AS score
ORDER BY similarity.score DESC
LIMIT 10
 
╒═════╤════╤══════════════════╕
│ep1  │ep2 │score             │
╞═════╪════╪══════════════════╡
│S1E2 │S1E1│0.6963730592072543│
├─────┼────┼──────────────────┤
│S1E4 │S1E3│0.6914173051223086│
├─────┼────┼──────────────────┤
│S1E9 │S1E8│0.6869464497590777│
├─────┼────┼──────────────────┤
│S2E10│S2E8│0.6869037302955034│
├─────┼────┼──────────────────┤
│S3E7 │S3E6│0.6819943394704735│
├─────┼────┼──────────────────┤
│S2E7 │S2E6│0.6813598225089799│
├─────┼────┼──────────────────┤
│S1E10│S1E9│0.6796436827080401│
├─────┼────┼──────────────────┤
│S1E5 │S1E4│0.6698105143372364│
├─────┼────┼──────────────────┤
│S1E10│S1E8│0.6624062584864754│
├─────┼────┼──────────────────┤
│S4E5 │S4E4│0.6518358737330705│
└─────┴────┴──────────────────┘

And the least similar pairs?

MATCH (episode1:Episode)-[similarity:SIMILAR_TO]-(episode2:Episode)
WHERE ID(episode1) > ID(episode2)
RETURN "S" + episode1.season + "E" + episode1.number AS ep1, 
       "S" + episode2.season + "E" + episode2.number AS ep2, 
       similarity.score AS score
ORDER BY similarity.score
LIMIT 10
 
╒════╤════╤═══════════════════╕
│ep1 │ep2 │score              │
╞════╪════╪═══════════════════╡
│S4E9│S1E5│0                  │
├────┼────┼───────────────────┤
│S4E9│S1E6│0                  │
├────┼────┼───────────────────┤
│S4E9│S4E2│0                  │
├────┼────┼───────────────────┤
│S4E9│S2E9│0                  │
├────┼────┼───────────────────┤
│S4E9│S2E4│0                  │
├────┼────┼───────────────────┤
│S5E6│S4E9│0                  │
├────┼────┼───────────────────┤
│S6E8│S4E9│0                  │
├────┼────┼───────────────────┤
│S4E9│S4E6│0                  │
├────┼────┼───────────────────┤
│S3E9│S2E9│0.03181423814878889│
├────┼────┼───────────────────┤
│S4E9│S1E1│0.03582871819500093│
└────┴────┴───────────────────┘

The output of this query suggests that there are no common characters between 8 pairs of episodes, which at first glance sounds surprising. Let’s write a query to check that finding:

MATCH (episode1:Episode)<-[:APPEARED_IN]-(character)-[:APPEARED_IN]->(episode2:Episode)
WHERE episode1.season = 4 AND episode1.number = 9 AND episode2.season = 1 AND episode2.number = 5
return episode1, episode2
 
(no changes, no rows)

It’s possible I made a mistake with the scraping of the data but from a quick look over the Wiki page I don’t think I have. I found it interesting that Season 4 Episode 9 shows up on 9 of the top 10 least similar pairs of episodes.

Next I’m going to cluster the episodes based on character appearances, but this post is long enough already so that’ll have to wait for another post another day.

Categories: Programming

SPaMCAST 408 – Kupe Kupersmith, Business Analysis and Agile

http://www.spamcast.net

Listen Now
Subscribe on iTunes
Check out the podcast on Google Play Music

The Software Process and Measurement Cast 408 features our interview with Kupe Kupersmith. Kupe and I discussed the role of the business analyst in today’s dynamic environment, a role that is critical to defining and facilitating the delivery of value. Weighty topics, but we also had a bit of fun.

“Kupe” Kupersmith, President, B2T Training, possesses over 18 years of experience in software systems development. He has served as the lead Business Analyst and Project Manager on projects in the energy, television and sports management and marketing industries. Additionally, he serves as a mentor for business analysis professionals. Kupe is the co-author of Business Analysis for Dummies, a Certified Business Analysis Professional (CBAP®) and a former IIBA® Board Member.

Kupe is a requested speaker and has presented at many conferences around the world. Being a trained improvisational comedian, Kupe is sure to make you laugh while you’re learning. For a feel for Kupe’s view on business analysis topics check out his blog on BA Times. Kupe is a connector and has a goal in life to meet everyone!

Contact Information

https://www.linkedin.com/in/kupetheba

https://www.b2ttraining.com/

Re-Read Saturday News

This week we continue our re-read of Kent Beck’s XP Explained, Second Edition with a discussion of Chapters 18 and 19.   Chapters 18 and 19 provide a view into two very different management philosophies that shaped software development in general and have had a major impact on XP.  Chapter 18 discusses Taylorism and scientific management; a management knows best view of the world. Chapter 19 talks about the Toyota Production System, which puts significant power back in the hands of the practitioner to deliver a quality product.

Use the link to XP Explained in the show notes when you buy your copy to read along to support both the blog and podcast. Visit the Software Process and Measurement Blog (www.tcagley.wordpress.com) to catch up on past installments of Re-Read Saturday.

Next, we are going to read The Five Dysfunctions of a Team by Patrick Lencioni (published by Jossey-Bass).  This will be a new book for me, therefore an initial read, not a re-read!  Steven Adams suggested the book and it has been on my list for a few years! Click the link (The Five Dysfunctions of a Team), buy a copy, and in a few weeks we will begin to read the book together.

Next SPaMCAST

In the next Software Process and Measurement Cast, we will feature an essay on whether a team is really one team or two.  While the essay is the result of answering a friend’s question, the ideas in it can be applied when you are building any sort of team.

We will also have columns from Jeremy Berriault’s QA Corner and Jon M. Quigley’s column, “The Alpha-Omega of Product Development.”

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.

 


Categories: Process Management


Protecting Android with more Linux kernel defenses

Android Developers Blog - Sun, 08/21/2016 - 21:53

Posted by Jeff Vander Stoep, Android Security team

Android relies heavily on the Linux kernel for enforcement of its security model. To better protect the kernel, we’ve enabled a number of mechanisms within Android. At a high level these protections are grouped into two categories—memory protections and attack surface reduction.

Memory protections

One of the major security features provided by the kernel is memory protection for userspace processes in the form of address space separation. Unlike userspace processes, the kernel’s various tasks live within one address space and a vulnerability anywhere in the kernel can potentially impact unrelated portions of the system’s memory. Kernel memory protections are designed to maintain the integrity of the kernel in spite of vulnerabilities.

Mark memory as read-only/no-execute

This feature segments kernel memory into logical sections and sets restrictive page access permissions on each section. Code is marked as read only + execute. Data sections are marked as no-execute and further segmented into read-only and read-write sections. This feature is enabled with config option CONFIG_DEBUG_RODATA. It was put together by Kees Cook and is based on a subset of Grsecurity’s KERNEXEC feature by Brad Spengler and Qualcomm’s CONFIG_STRICT_MEMORY_RWX feature by Larry Bassel and Laura Abbott. CONFIG_DEBUG_RODATA landed in the upstream kernel for arm/arm64 and has been backported to Android’s 3.18+ arm/arm64 common kernel.

Restrict kernel access to userspace

This feature improves protection of the kernel by preventing it from directly accessing userspace memory. This can make a number of attacks more difficult because attackers have significantly less control over kernel memory that is executable, particularly with CONFIG_DEBUG_RODATA enabled. Similar features were already in existence, the earliest being Grsecurity’s UDEREF. This feature is enabled with config option CONFIG_CPU_SW_DOMAIN_PAN and was implemented by Russell King for ARMv7 and backported to Android’s 4.1 kernel by Kees Cook.

Improve protection against stack buffer overflows

Much like its predecessor, stack-protector, stack-protector-strong protects against stack buffer overflows, but additionally provides coverage for more array types, as the original only protected character arrays. Stack-protector-strong was implemented by Han Shen and added to the gcc 4.9 compiler.

Attack surface reduction

Attack surface reduction attempts to expose fewer entry points to the kernel without breaking legitimate functionality. Reducing attack surface can include removing code, removing access to entry points, or selectively exposing features.

Remove default access to debug features

The kernel’s perf system provides infrastructure for performance measurement and can be used for analyzing both the kernel and userspace applications. Perf is a valuable tool for developers, but adds unnecessary attack surface for the vast majority of Android users. In Android Nougat, access to perf will be blocked by default. Developers may still access perf by enabling developer settings and using adb to set a property: “adb shell setprop security.perf_harden 0”.

The patchset for blocking access to perf may be broken down into kernel and userspace sections. The kernel patch is by Ben Hutchings and is derived from Grsecurity’s CONFIG_GRKERNSEC_PERF_HARDEN by Brad Spengler. The userspace changes were contributed by Daniel Micay. Thanks to Wish Wu and others for responsibly disclosing security vulnerabilities in perf.

Restrict app access to ioctl commands

Much of the Android security model is described and enforced by SELinux. The ioctl() syscall represented a major gap in the granularity of that enforcement. Ioctl command whitelisting was added as a means to provide per-command control over the ioctl syscall via SELinux.

Most of the kernel vulnerabilities reported on Android occur in drivers and are reached using the ioctl syscall, for example CVE-2016-0820. Some ioctl commands are needed by third-party applications, however most are not and access can be restricted without breaking legitimate functionality. In Android Nougat, only a small whitelist of socket ioctl commands are available to applications. For select devices, applications’ access to GPU ioctls has been similarly restricted.

Require seccomp-bpf

Seccomp provides an additional sandboxing mechanism allowing a process to restrict the syscalls and syscall arguments available using a configurable filter. Restricting the availability of syscalls can dramatically cut down on the exposed attack surface of the kernel. Since seccomp was first introduced on Nexus devices in Lollipop, its availability across the Android ecosystem has steadily improved. With Android Nougat, seccomp support is a requirement for all devices. On Android Nougat we are using seccomp on the mediaextractor and mediacodec processes as part of the media hardening effort.

Ongoing efforts

There are other projects underway aimed at protecting the kernel:

  • The Kernel Self Protection Project is developing runtime and compiler defenses for the upstream kernel.
  • Further sandbox tightening and attack surface reduction with SELinux is ongoing in AOSP.
  • Minijail provides a convenient mechanism for applying many containment and sandboxing features offered by the kernel, including seccomp filters and namespaces.
  • Projects like kasan and kcov help fuzzers discover the root cause of crashes and intelligently construct test cases that increase code coverage—ultimately resulting in a more efficient bug hunting process.

Due to these efforts and others, we expect the security of the kernel to continue improving. As always, we appreciate feedback on our work and welcome suggestions for how we can improve Android. Contact us at security@android.com.

Categories: Programming

GTAC 2014 Wrap-up

Google Testing Blog - Sun, 08/21/2016 - 17:33
by Anthony Vallone on behalf of the GTAC Committee

On October 28th and 29th, GTAC 2014, the eighth GTAC (Google Test Automation Conference), was held at the beautiful Google Kirkland office. The conference was completely packed with presenters and attendees from all over the world (Argentina, Australia, Canada, China, many European countries, India, Israel, Korea, New Zealand, Puerto Rico, Russia, Taiwan, and many US states), bringing with them a huge diversity of experiences.


Speakers from numerous companies and universities (Adobe, American Express, Comcast, Dropbox, Facebook, FINRA, Google, HP, Medidata Solutions, Mozilla, Netflix, Orange, and University of Waterloo) spoke on a variety of interesting and cutting edge test automation topics.

All of the slides and video recordings are now available on the GTAC site. Photos will be available soon as well.


This was our most popular GTAC to date, with over 1,500 applicants and almost 200 of those for speaking. About 250 people filled our venue to capacity, and the live stream had a peak of about 400 concurrent viewers with 4,700 playbacks during the event. And, there was plenty of interesting Twitter and Google+ activity during the event.


Our goal in hosting GTAC is to make the conference highly relevant and useful not only for attendees, but for the larger test engineering community as a whole. Our post-conference survey shows that we are close to achieving that goal.

If you have any suggestions on how we can improve, please comment on this post.

Thank you to all the speakers, attendees, and online viewers who made this a special event once again. To receive announcements about the next GTAC, subscribe to the Google Testing Blog.

Categories: Testing & QA

GTAC 2015 Coming to Cambridge (Greater Boston) in November

Google Testing Blog - Sun, 08/21/2016 - 17:33
Posted by Anthony Vallone on behalf of the GTAC Committee


We are pleased to announce that the ninth GTAC (Google Test Automation Conference) will be held in Cambridge (Greatah Boston, USA) on November 10th and 11th (Toozdee and Wenzdee), 2015. So, tell everyone to save the date for this wicked good event.

GTAC is an annual conference hosted by Google, bringing together engineers from industry and academia to discuss advances in test automation and the test engineering computer science field. It’s a great opportunity to present, learn, and challenge modern testing technologies and strategies.

You can browse presentation abstracts, slides, and videos from previous years on the GTAC site.

Stay tuned to this blog and the GTAC website for application information and opportunities to present at GTAC. Subscribing to this blog is the best way to get notified. We're looking forward to seeing you there!

Categories: Testing & QA

GTAC 2015: Call for Proposals & Attendance

Google Testing Blog - Sun, 08/21/2016 - 17:32
Posted by Anthony Vallone on behalf of the GTAC Committee

The GTAC (Google Test Automation Conference) 2015 application process is now open for presentation proposals and attendance. GTAC will be held at the Google Cambridge office (near Boston, Massachusetts, USA) on November 10th - 11th, 2015.

GTAC will be streamed live on YouTube again this year, so even if you can’t attend in person, you’ll be able to watch the conference remotely. We will post the live stream information as we get closer to the event, and recordings will be posted afterward.

Speakers
Presentations are targeted at student, academic, and experienced engineers working on test automation. Full presentations are 30 minutes and lightning talks are 10 minutes. Speakers should be prepared for a question and answer session following their presentation.

Application
For presentation proposals and/or attendance, complete this form. We will be selecting about 25 talks and 200 attendees for the event. The selection process is not first come first serve (no need to rush your application), and we select a diverse group of engineers from various locations, company sizes, and technical backgrounds (academic, industry expert, junior engineer, etc).

Deadline
The due date for both presentation and attendance applications is August 10th, 2015.

Fees
There are no registration fees, but speakers and attendees must arrange and pay for their own travel and accommodations.

More information
You can find more details at developers.google.com/gtac.

Categories: Testing & QA

The Deadline to Apply for GTAC 2015 is Monday Aug 10

Google Testing Blog - Sun, 08/21/2016 - 17:32
Posted by Anthony Vallone on behalf of the GTAC Committee


The deadline to apply for GTAC 2015 is this Monday, August 10th, 2015. There is a great deal of interest to both attend and speak, and we’ve received many outstanding proposals. However, it’s not too late to submit your proposal for consideration. If you would like to speak or attend, be sure to complete the form by Monday.

We will be making regular updates to the GTAC site (developers.google.com/gtac/2015/) over the next several weeks, and you can find conference details there.

For those that have already signed up to attend or speak, we will contact you directly by mid-September.

Categories: Testing & QA

Announcing the GTAC 2015 Agenda

Google Testing Blog - Sun, 08/21/2016 - 17:31
by Anthony Vallone on behalf of the GTAC Committee 

We have completed the selection and confirmation of all speakers and attendees for GTAC 2015. You can find the detailed agenda at: developers.google.com/gtac/2015/schedule.

Thank you to all who submitted proposals!

There is a lot of interest in GTAC once again this year with about 1400 applicants and about 200 of those for speaking. Unfortunately, our venue only seats 250. We will livestream the event as usual, so fret not if you were not selected to attend. Information about the livestream and other details will be posted on the GTAC site soon and announced here.

Categories: Testing & QA

GTAC 2015 is Next Week!

Google Testing Blog - Sun, 08/21/2016 - 17:31
by Anthony Vallone on behalf of the GTAC Committee

The ninth GTAC (Google Test Automation Conference) commences on Tuesday, November 10th, at the Google Cambridge office. You can find the latest details on the conference site, including schedule, speaker profiles, and travel tips.

If you have not been invited to attend in person, you can watch the event live. And if you miss the livestream, we will post slides and videos later.

We have an outstanding speaker lineup this year, and we look forward to seeing you all there or online!

Categories: Testing & QA

The Inquiry Method for Test Planning

Google Testing Blog - Sun, 08/21/2016 - 17:30
by Anthony Vallone
updated: July 2016



Creating a test plan is often a complex undertaking. An ideal test plan is accomplished by applying basic principles of cost-benefit analysis and risk analysis, optimally balancing these software development factors:
  • Implementation cost: The time and complexity of implementing testable features and automated tests for specific scenarios will vary, and this affects short-term development cost.
  • Maintenance cost: Some tests or test plans may vary from easy to difficult to maintain, and this affects long-term development cost. When manual testing is chosen, this also adds to long-term cost.
  • Monetary cost: Some test approaches may require billed resources.
  • Benefit: Tests are capable of preventing issues and aiding productivity by varying degrees. Also, the earlier they can catch problems in the development life-cycle, the greater the benefit.
  • Risk: The probability of failure scenarios may vary from rare to likely, and their consequences may vary from minor nuisance to catastrophic.
Effectively balancing these factors in a plan depends heavily on project criticality, implementation details, resources available, and team opinions. Many projects can achieve outstanding coverage with high-benefit, low-cost unit tests, but they may need to weigh options for larger tests and complex corner cases. Mission critical projects must minimize risk as much as possible, so they will accept higher costs and invest heavily in rigorous testing at all levels.
This guide puts the onus on the reader to find the right balance for their project. Also, it does not provide a test plan template, because templates are often too generic or too specific and quickly become outdated. Instead, it focuses on selecting the best content when writing a test plan.
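
To make this balancing act concrete, here is a rough, hypothetical Python sketch that scores candidate tests by the benefit and risk they cover relative to their implementation and maintenance cost. The 1-5 scales, the priority formula, and the candidate names are all invented for illustration; every project will weigh these factors differently based on criticality, resources, and team judgment.

# A rough, hypothetical sketch of weighing test candidates by the factors above.
# The scales, formula, and candidate names are illustrative only.

from dataclasses import dataclass

@dataclass
class TestCandidate:
    name: str
    implementation_cost: int  # 1 (cheap) .. 5 (expensive to build)
    maintenance_cost: int     # 1 (easy)  .. 5 (hard to keep running)
    benefit: int              # 1 (minor) .. 5 (catches critical issues early)
    risk_covered: int         # 1 (rare/minor failures) .. 5 (likely/catastrophic)

def priority(candidate: TestCandidate) -> float:
    """Higher is better: value delivered per unit of cost."""
    value = candidate.benefit + candidate.risk_covered
    cost = candidate.implementation_cost + candidate.maintenance_cost
    return value / cost

candidates = [
    TestCandidate("unit tests for input parser", 1, 1, 4, 3),
    TestCandidate("end-to-end billing flow", 4, 4, 5, 5),
    TestCandidate("manual exploratory pass on settings UI", 2, 5, 2, 2),
]

# Print candidates from highest to lowest priority.
for candidate in sorted(candidates, key=priority, reverse=True):
    print(f"{candidate.name}: priority {priority(candidate):.2f}")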

Test plan vs. strategy
Before proceeding, two common methods for defining test plans need to be clarified:
  • Single test plan: Some projects have a single "test plan" that describes all implemented and planned testing for the project.
  • Single test strategy and many plans: Some projects have a "test strategy" document as well as many smaller "test plan" documents. Strategies typically cover the overall test approach and goals, while plans cover specific features or project updates.
Either of these may be embedded in and integrated with project design documents. Both of these methods work well, so choose whichever makes sense for your project. Generally speaking, stable projects benefit from a single plan, whereas rapidly changing projects are best served by infrequently changed strategies and frequently added plans.
For the purpose of this guide, I will refer to both test document types simply as "test plans". If you have multiple documents, just apply the advice below to your document aggregation.

Content selection
A good approach to creating content for your test plan is to start by listing all questions that need answers. The lists below provide a comprehensive collection of important questions that may or may not apply to your project. Go through the lists and select all that apply. By answering these questions, you will form the contents for your test plan, and you should structure your plan around the chosen content in any format your team prefers. Be sure to balance the factors as mentioned above when making decisions.

Prerequisites
  • Do you need a test plan? If there is no project design document or a clear vision for the product, it may be too early to write a test plan.
  • Has testability been considered in the project design? Before a project gets too far into implementation, all scenarios must be designed as testable, preferably via automation. Both project design documents and test plans should comment on testability as needed.
  • Will you keep the plan up-to-date? If so, be careful about adding too much detail, otherwise it may be difficult to maintain the plan.
  • Does this quality effort overlap with other teams? If so, how have you deduplicated the work?

Risk
  • Are there any significant project risks, and how will you mitigate them? Consider:
    • Injury to people or animals
    • Security and integrity of user data
    • User privacy
    • Security of company systems
    • Hardware or property damage
    • Legal and compliance issues
    • Exposure of confidential or sensitive data
    • Data loss or corruption
    • Revenue loss
    • Unrecoverable scenarios
    • SLAs
    • Performance requirements
    • Misinforming users
    • Impact to other projects
    • Impact from other projects
    • Impact to company’s public image
    • Loss of productivity
  • What are the project’s technical vulnerabilities? Consider:
    • Features or components known to be hacky, fragile, or in great need of refactoring
    • Dependencies or platforms that frequently cause issues
    • Possibility for users to cause harm to the system
    • Trends seen in past issues

Coverage
  • What does the test surface look like? Is it a simple library with one method, or a multi-platform client-server stateful system with a combinatorial explosion of use cases? Describe the design and architecture of the system in a way that highlights possible points of failure.
  • What platforms are supported? Consider listing supported operating systems, hardware, devices, etc. Also describe how testing will be performed and reported for each platform.
  • What are the features? Consider making a summary list of all features and describe how certain categories of features will be tested.
  • What will not be tested? No test suite covers every possibility. It’s best to be up-front about this and provide rationale for not testing certain cases. Examples: low risk areas that are a low priority, complex cases that are a low priority, areas covered by other teams, features not ready for testing, etc. 
  • What is covered by unit (small), integration (medium), and system (large) tests? Always test as much as possible in smaller tests, leaving fewer cases for larger tests. Describe how certain categories of test cases are best tested by each test size and provide rationale.
  • What will be tested manually vs. automated? When feasible and cost-effective, automation is usually best. Many projects can automate all testing. However, there may be good reasons to choose manual testing. Describe the types of cases that will be tested manually and provide rationale.
  • How are you covering each test category?
  • Will you use static and/or dynamic analysis tools? Both static analysis tools and dynamic analysis tools can find problems that are hard to catch in reviews and testing, so consider using them.
  • How will system components and dependencies be stubbed, mocked, faked, staged, or used normally during testing? There are good reasons to do each of these, and they each have a unique impact on coverage (a small illustrative sketch of mocking a dependency follows this list).
  • What builds are your tests running against? Are tests running against a build from HEAD (aka tip), a staged build, and/or a release candidate? If only from HEAD, how will you test release build cherry picks (selection of individual changelists for a release) and system configuration changes not normally seen by builds from HEAD?
  • What kind of testing will be done outside of your team? Examples:
    • Dogfooding
    • External crowdsource testing
    • Public alpha/beta versions (how will they be tested before releasing?)
    • External trusted testers
  • How are data migrations tested? You may need special testing to compare before and after migration results.
  • Do you need to be concerned with backward compatibility? You may own previously distributed clients or there may be other systems that depend on your system’s protocol, configuration, features, and behavior.
  • Do you need to test upgrade scenarios for server/client/device software or dependencies/platforms/APIs that the software utilizes?
  • Do you have line coverage goals?
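
Several of the coverage questions above touch on stubbing, mocking, and faking. The following is a minimal, hypothetical sketch using Python's standard unittest.mock module; the checkout() function and its gateway dependency are invented placeholders, not part of any real system discussed here. The point is simply that mocking the external dependency keeps the small test hermetic, while coverage of the real dependency is deferred to a larger integration test.

# A minimal, hypothetical sketch of mocking an external dependency in a small test.
# checkout() and the gateway object are illustrative placeholders.

import unittest
from unittest import mock

def checkout(gateway, amount):
    """Charge the gateway and report whether the charge succeeded."""
    response = gateway.charge(amount)
    return response["status"] == "ok"

class CheckoutTest(unittest.TestCase):
    def test_successful_charge(self):
        # Mock the payment gateway so the test stays small and hermetic;
        # the real gateway is exercised by larger integration tests.
        gateway = mock.Mock()
        gateway.charge.return_value = {"status": "ok"}

        self.assertTrue(checkout(gateway, 42))
        gateway.charge.assert_called_once_with(42)

if __name__ == "__main__":
    unittest.main()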

Tooling and Infrastructure
  • Do you need new test frameworks? If so, describe these or add design links in the plan.
  • Do you need a new test lab setup? If so, describe these or add design links in the plan.
  • If your project offers a service to other projects, are you providing test tools to those users? Consider providing mocks, fakes, and/or reliable staged servers for users trying to test their integration with your system.
  • For end-to-end testing, how will test infrastructure, systems under test, and other dependencies be managed? How will they be deployed? How will persistence be set-up/torn-down? How will you handle required migrations from one datacenter to another?
  • Do you need tools to help debug system or test failures? You may be able to use existing tools, or you may need to develop new ones.

Process
  • Are there test schedule requirements? What time commitments have been made, which tests will be in place (or test feedback provided) by what dates? Are some tests important to deliver before others?
  • How are builds and tests run continuously? Most small tests will be run by continuous integration tools, but large tests may need a different approach. Alternatively, you may opt for running large tests as-needed. 
  • How will build and test results be reported and monitored?
    • Do you have a team rotation to monitor continuous integration?
    • Large tests might require monitoring by someone with expertise.
    • Do you need a dashboard for test results and other project health indicators?
    • Who will get email alerts and how?
    • Will the person monitoring tests simply use verbal communication to the team?
  • How are tests used when releasing?
    • Are they run explicitly against the release candidate, or does the release process depend only on continuous test results? 
    • If system components and dependencies are released independently, are tests run for each type of release? 
    • Will a "release blocker" bug stop the release manager(s) from actually releasing? Is there an agreement on what are the release blocking criteria?
    • When performing canary releases (aka % rollouts), how will progress be monitored and tested?
  • How will external users report bugs? Consider feedback links or other similar tools to collect and cluster reports.
  • How does bug triage work? Consider labels or categories for bugs so that they land in a triage bucket. Also make sure the teams responsible for filing bugs and/or creating the bug report template are aware of this. Are you using one bug tracker, or do you need to set up an automatic or manual import routine?
  • Do you have a policy for submitting new tests before closing bugs that could have been caught?
  • How are tests used for unsubmitted changes? If anyone can run all tests against any experimental build (a good thing), consider providing a howto (a minimal sketch of one follows this list).
  • How can team members create and/or debug tests? Consider providing a howto.
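
As one possible shape for the unsubmitted-changes howto mentioned above, here is a minimal, hypothetical Python sketch that discovers and runs all tests in a local working copy using standard unittest discovery. The tests/ directory layout is an assumption; substitute your project's actual test runner, build system, and experimental-build mechanism.

# A hypothetical presubmit helper: run every test in the working copy.
# The "tests" directory name is an assumption about project layout.

import subprocess
import sys

def run_presubmit(test_dir: str = "tests") -> int:
    """Discover and run all tests under test_dir; return the runner's exit code."""
    result = subprocess.run(
        [sys.executable, "-m", "unittest", "discover", "-s", test_dir, "-v"]
    )
    return result.returncode

if __name__ == "__main__":
    sys.exit(run_presubmit())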

Utility
  • Who are the test plan readers? Some test plans are only read by a few people, while others are read by many. At a minimum, you should consider getting a review from all stakeholders (project managers, tech leads, feature owners). When writing the plan, be sure to understand the expected readers, provide them with enough background to understand the plan, and answer all questions you think they will have - even if your answer is that you don’t have an answer yet. Also consider adding contacts for the test plan, so any reader can get more information.
  • How can readers review the actual test cases? Manual cases might be in a test case management tool, in a separate document, or included in the test plan. Consider providing links to directories containing automated test cases.
  • Do you need traceability between requirements, features, and tests?
  • Do you have any general product health or quality goals, and how will you measure success? (A small sketch of tracking one such metric follows this list.) Consider:
    • Release cadence
    • Number of bugs caught by users in production
    • Number of bugs caught in release testing
    • Number of open bugs over time
    • Code coverage
    • Cost of manual testing
    • Difficulty of creating new tests
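
As a small illustration of tracking one of these health metrics, the hypothetical Python sketch below reports the trend in open bugs across a few snapshots. The snapshot dates, counts, and data format are invented; a real project would pull these numbers from its bug tracker's reporting or export facilities.

# A hypothetical sketch of trending one quality metric: open bugs over time.
# The snapshot data below is invented for illustration.

from datetime import date

open_bug_counts = {
    date(2015, 9, 1): 120,
    date(2015, 10, 1): 95,
    date(2015, 11, 1): 87,
}

def trend(counts: dict) -> str:
    """Compare the earliest and latest snapshots and describe the direction."""
    ordered = [counts[snapshot] for snapshot in sorted(counts)]
    delta = ordered[-1] - ordered[0]
    return "improving" if delta < 0 else "worsening or flat"

print(f"Open bugs trend: {trend(open_bug_counts)}")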


Categories: Testing & QA

Extreme Programming Explained, Second Edition: Re-Read Week 9 (Chapters 18 – 19)

XP Explained Cover

This week we continue with the re-read of Kent Beck and Cynthia Andres’s Extreme Programming Explained, Second Edition (2005) with two more chapters in Section Two.  Chapters 18 and 19 provide a view into two very different management philosophies that shaped software development in general and have had a major impact on XP.  Chapter 18 discusses Taylorism and scientific management, a management-knows-best view of the world. Chapter 19 talks about the Toyota Production System, which puts significant power back in the hands of the practitioner to deliver a quality product.

Chapter 18: Taylorism and Software

Somewhere in the dark ages when I was a senior in high school, I worked for Firestone Tire and Rubber. During my stint as a tire sorter and mold cleaner, the time and motion people terrorized me and almost everyone else at the plant.  They lurked behind the machines with clipboards and stopwatches to ensure all workers were following ‘the right way’ to do work. Cut to approximately four years later: as a senior at Louisiana State University, I took several courses in industrial engineering.  Between the two sets of experiences, I learned a lot about industrial engineering, scientific management, and Taylorism. Some of what I learned made sense for highly structured manufacturing plants, but very little of it makes sense for organizations writing and delivering software (although many people still try to apply these concepts).

Frederick Taylor led the efficiency movement of the 1890s, culminating in his book The Principles of Scientific Management (1911). Scientific management suggests that there is ‘one best way’ to do any piece of work, a way that can be identified and measured (hence the stopwatches). Scientific management continues to be used by many organizations, from manufacturing to hospitals (at least according to my sister-in-law, who is a nurse). It is hard to resist something named scientific management, even though it tends to clash with the less regimented concepts found in the Agile and lean frameworks used for knowledge work.

Side Note: Beck points out the brilliance in naming “scientific management”: who would be in favor of the opposite, “unscientific management”? (The book notes that when picking descriptive names, it helps to pick a name whose opposite is unappealing, for example, the Patriot Act.)

Why the clash?  Scientific management was born in a manufacturing environment, not a software development environment. Taylor was focused on getting the most out of workers whom he felt had to be led and controlled closely.  In Taylor’s world, making steel or assembling cars were repeatable and predictable processes, and the workers were cogs in the machine. Time and motion studies, a common tool of scientific management, run into problems when applied to many types of work, including software development, because of several simplifying assumptions.  Beck points out three critical assumptions made by Taylor.

  1.   Things usually go according to plan as work moves through a repeatable process.
  2.   Improving individual steps leads to optimization of the overall process.
  3.   People are mostly interchangeable and need to be told what to do.

Take a few minutes while you consider the simplifying assumptions as they are applied to writing, testing and delivering functionality, and then stop laughing.

While very few enlightened CIOs would admit to being adherents of Taylor, many would describe their “shop” as a software factory and actively leverage at least some of the tenets of scientific management.  The practice of social engineering developed by Taylor and his followers is built into the role specialization model of IT that is nearly ubiquitous. One form of social engineering still practiced on the factory floor today is the separation of workers from planners, which in software development translates into separating estimators and project managers from developers and testers. The planners and estimators decide how the work will be done and how long the workers will take to deliver a piece of work. Developers are treated as 21st-century cogs in the machine that work at the pace specified by the planners and estimators.  Every software development framework decries this practice, and yet it still exists.  Similarly, Beck points out that creating a separate quality department is another form of social engineering.  Separating the quality function is meant to ensure that workers work to the correct quality by checking at specific points in the process flow.  In every case, separating activities generates bottlenecks and constraints, and potentially makes the groups enemies of one another. Once upon a time I heard a group of developers claim that they had completed development but the testers had caused the project to be late.  That is a reflection of Taylorism and social engineering.

Chapter 19: Toyota

The Toyota Production System (TPS) is an alternative to Taylorism.  Much has been written about TPS, including several books by Mary and Tom Poppendieck, who pioneered applying TPS concepts to software development and maintenance. In TPS, each worker is responsible for the whole production line rather than a single function. One of the goals of each step in the process is to make the quality of the production line high enough that downstream quality assurance is not needed.

In the TPS there is less social stratification and less need for independent checking. Less independent checking is needed because workers feel accountable for their work, which will immediately be used by the next step in the process. In software development, a developer writes and tests code that forms the basis for future stories. A developer in an organization using the TPS can’t hide poor quality and will be subject to peer pressure to clean up their act and deliver good quality.

Beck caps the chapter with a reminder of the time value of money.  Making anything and then not using it immediately to generate feedback causes its informational value to evaporate. This is one of the reasons why short iterations and quick feedback generate more value than classic waterfall.

Previous installments of Extreme Programming Explained, Second Edition (2005) on Re-read Saturday:

Extreme Programming Explained: Embrace Change Second Edition Week 1, Preface and Chapter 1

Week 2, Chapters 2 – 3

Week 3, Chapters 4 – 5

Week 4, Chapters 6 – 7  

Week 5, Chapters 8 – 9

Week 6, Chapters 10 – 11

Week 7, Chapters 12 – 13

Week 8, Chapters 14 – 15

Week 9, Chapters 16 – 17

A few quick notes. We are going to read The Five Dysfunctions of a Team by Patrick Lencioni (published by Jossey-Bass).  This will be a new book for me, therefore an initial read rather than a re-read!  Steven Adams suggested the book, and it has been on my list for a few years. Click the link (The Five Dysfunctions of a Team), buy a copy, and in a few weeks we will begin to read the book together.

 

 


Categories: Process Management

Stuff The Internet Says On Scalability For August 19th, 2016

Hey, it's HighScalability time:

 


Modern art? Nope. Pancreatic cancer revealed by fluorescent labeling.

 

If you like this sort of Stuff then please support me on Patreon.
  • 4: SpaceX rocket landings at sea; 32TB: 3D Vertical NAND Flash; 10x: compute power for deep learning as the best of today’s GPUs; 87%: of vehicles could go electric without any range problems; 0.06%: visitors that post comments on NPR; 235k: terrorism related Twitter accounts closed; 40%: AMD improvement in instructions per clock for Zen; 15%: apps are slower in summer because of humidity;

  • Quotable Quotes:
    • @netik: There is no Internet of Things. There are only many unpatched, vulnerable small computers on the Internet.
    • @Pinboard: The Programmers’ Credo: we do these things not because they are easy, but because we thought they were going to be easy
    • Aphyr: This advantage is not shared by sequential consistency, or its multi-object cousin, serializability. This much, I knew–but Herlihy & Wing go on to mention, almost offhand, that strict serializability is also nonlocal!
    • @PHP_CEO: I’VE HAD AN IDEA / WE’LL TAKE ALL THE BAD CODE / BUNDLE IT TOGETHER / AND SELL IT TO VCS AS A COLLATERALIZED TECHNICAL DEBT OBLIGATION
    • felixgallo: I agree, the actor model is a significantly more usable metaphor for containers than functions. When you start thinking about supervisor trees, you start heading towards Kubernetes, which is interesting.
    • David Rosenthal: So in practice blockchains are decentralized (not), anonymous (not and not), immutable (not), secure (not), fast (not) and cheap (not). What's (not) to like?
    • @grimmelm: You know, you can’t spell “idiotic” without “IoT”
    • @jroper: 10 years ago, backends were monolithic services and frontends many pages. Now frontends are monolithic pages and backends many services.
    • @jakevoytko: Ordinary human: Hey, this is a fork. You can eat with it! People who comment on programming blogs: You can't eat soup with that.
    • iLoch: Wow $5000/mo for 2000rps, just for the application servers? That's absurd. I think we're paying around $2000/mo for our app servers, a database which is over 2TB in size, and we ingest about 10 megabytes of text data per second, on top of a couple thousand requests per second to the user facing application.
    • @josh_wills: I'm thinking about writing a book on data engineering for kids: "An Immutable, Append-Only Log of Unfortunate Events"
    • Kill Process: What the world needs is not a new social network that concentrates power in a single place, but a design to intrinsically prevent the concentration of power that results in barriers to switching.
    • ljmasternoob: the bump was just Schrödinger's cat stepping on Occam's razor.
    • carsongross: The JVM is a treasure just sitting there waiting to be rediscovered.
    • @mjpt777: When @nitsanw points out some of what he finds in the JVM I often end up crying :(
    • @karpathy: I hoped TensorFlow would standardize our code but it's low level so we've diverged on layers over it: Slim, PrettyTensor, Keras, TFLearn ...
    • @rbranson:  coordination is a scaling bottleneck in teams as much as it is in distributed systems.
    • @mathiasverraes: There are only two hard problems in distributed systems:  2. Exactly-once delivery 1. Guaranteed order of messages 2. Exactly-once delivery
    • @PhilDarnowsky: I've been using dynamically typed languages for a living for a decade. As a result, I prefer statically typed languages.
    • Allyn Malventano: 64-Layer is Samsung's 4th generation of V-NAND. We've seen 48-Layer and 32-Layer, but few know that 24-Layer was a thing (but was mainly in limited enterprise parts).
    • @cmeik: "It's a bit odd to me that programming languages today only give you the ability to write something that runs on one machine..." [1/2]
    • @trengriffin: @amcafee Use of higher radio frequencies will require a lot more antennas creating ever smaller coverage areas. More heterogeneous bandwidth
    • @jamesurquhart: Disagree IaaS multicloud tools will play major role moving forward. Game is in PaaS and app deployment (containers).

  • Linking it all together on a great episode of This Week In Tech. Google’s new OS, Fuchsia, for places where Android fears to tread: smaller, lower power IoT type devices. Intel Optane is an almost shipping non-volatile memory that is 1000X faster than SSD (maybe not), has up to 10X the capacity of DRAM, and is only a few X slower than typical DRAM, which makes it perfect for converged IoT devices. Say goodbye to blocks and memory tiers. IoT devices don't have to be fast, so DRAM can be replaced with this new memory, hopefully making simpler, cheaper devices that can last a decade on a small battery, especially when combined with low power ARM CPUs. NVMe is replacing SATA and AHCI for higher bandwidth, lower latency access to non-volatile memory. 5G, when it comes out, will specifically support billions of low power IoT devices. Machine learning ties everything together. That future that is full of sensors may actually happen. As Greg Ferro said (approximately): we are starting to see the convergence of multiple advances. You can start to plot a pathway forward to see where the disruption occurs. The irony, still, is that nothing will work together. We have ubiquitous wifi more from a fluke of history than any conscious design. We see how, when left up to industry, the silo mindset captures all reason, and we are all the poorer for it.

  • We have water rights. Mineral rights. Surface rights. Is there such a thing as virtual property rights? Do you own the virtual property rights of your own property when someone else decides to use it in an application? Pokemon GO Hit With Class Action Lawsuit. Why do people keep coming to this couple’s home looking for lost phones?

  • As data becomes more valuable, the idea that we are the product becomes assumed. Provider of Personal Finance Tools Tracks Bank Cards, Sells Data to Investors: Yodlee has another way of making money: The company sells some of the data it gathers from credit- and debit-card transactions to investors and research firms...“Yodlee can tell you down to the day how much the water bill was across 25,000 citizens of San Francisco” or the daily spending at McDonald’s throughout the country...The details are so valuable that some investment firms have paid more than $2 million apiece for an annual subscription to Yodlee’s service.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Two Teams or Not: Courage and Flexibility

You got to have courage!

I listen to Malcolm Gladwell’s wonderful Revisionist History podcast. In the last podcast of season one, he discussed “the satire paradox.” The punchline of this installment of the podcast is that change is not possible without courage. Flexibility requires courage.  Change, such as embracing something like Agile, requires the flexibility to give something up.  Perhaps we might be asked to move outside of our comfort zone and work differently, or to work with people we haven’t worked with before.  Asking testers and developers to work on the same team or as pairs, or asking backend and UI subteams to work together, requires flexibility.  Flexibility to embrace Agile, or any other significant change to how we work, can be defined by four basic attributes:

  1. Ability to accept changing priorities – Agile projects or products are based on a backlog that encompasses the prioritized list of work for the team(s).  This backlog will evolve based on knowledge and feedback. The evolution of the backlog includes changes to the items on the list and to their priority. All team members, whether developers, business analysts, or testers, need to accept that what we plan to do in the next sprint might not be what we originally thought.
  2. Ability to accept changing roles and workload – Self-directed and self-managed teams make decisions that affect who does what and when.  Each team member needs to accept that they might be asked (or need to volunteer) to do whatever is needed for the team to be successful.  Adopting concepts such as specializing generalists or T-shaped people is a direct reflection of the need for flexibility.
  3. Ability to adapt to changing environments – Business and technical architectures change over time. Architectures are a reflection of how someone (a team or an architect) perceives the environment at a specific moment in time; that perception changes as the team gathers knowledge and elicits feedback from the functional code it delivers. Implementing the adage that developers should “run towards feedback” requires courage and flexibility.
  4. Ability to persist – Any process change requires doing something different, which is often scary or uncomfortable, even if only briefly. If we gave up at the first sign of unease, nothing would ever change, even when the data says that staying the course will pay off.  For example, the first day at each of the six universities I attended was full of stress (I remember once even dreaming that I could not find my classes). I was able to find the courage to persist and push through that unease in order to make the change and find a seat in the back of the room in each class.

When I have been asked whether two teams were really one team, or whether they should find a way to work together, my answers have been premised on the assumption that they had the courage and the flexibility to change. The discussion of courage and flexibility is less about Agile techniques than about change management. A test of whether courage and flexibility are the underlying issues can be as simple as listening to team members’ comments.  If you hear comments such as “we have always done it that way” or “why can’t we do it the way we used to?”, then leaders and influencers need to assess whether the team or individual has the courage and flexibility to change.  If they do not, leaders and coaches need to help develop an environment in which courage and flexibility can grow before any specific process framework or technique can succeed.

Changing how people work is difficult because most people only choose to change if they perceive that the benefit of changing (or the pain it avoids) outweighs the pain of making the change. Flexibility is the set of abilities that helps individuals and teams make a choice and then establish a commitment to that choice so that change happens.


Categories: Process Management