
Software Development Blogs: Programming, Software Testing, Agile Project Management


Feed aggregator

Neo4j: Cypher – Avoiding the Eager

Mark Needham - Thu, 10/23/2014 - 06:56

Although I love how easy Cypher’s LOAD CSV command makes it to get data into Neo4j, it currently breaks the rule of least surprise by eagerly loading in all rows for some queries, even those using periodic commit.

Beware of the eager pipe

This is something that my colleague Michael noted in the second of his blog posts explaining how to use LOAD CSV successfully:

The biggest issue that people ran into, even when following the advice I gave earlier, was that for large imports of more than one million rows, Cypher ran into an out-of-memory situation.

That was not related to commit sizes, so it happened even with PERIODIC COMMIT of small batches.

I recently spent a few days importing data into Neo4j on a Windows machine with 4GB RAM so I was seeing this problem even earlier than Michael suggested.

Michael explains how to work out whether your query is suffering from unexpected eager evaluation:

If you profile that query you see that there is an “Eager” step in the query plan.

That is where the “pull in all data” happens.

You can profile queries by prefixing them with the word ‘PROFILE’. You’ll need to run your query in the /webadmin console in your web browser or with the Neo4j shell.

I did this for my queries and was able to identify query patterns which get evaluated eagerly; in some cases we can work around it.

We’ll use the Northwind data set to demonstrate how the Eager pipe can sneak into our queries, but keep in mind that this data set is small enough not to cause issues.

This is what a row in the file looks like:

$ head -n 2 data/customerDb.csv
OrderID,CustomerID,EmployeeID,OrderDate,RequiredDate,ShippedDate,ShipVia,Freight,ShipName,ShipAddress,ShipCity,ShipRegion,ShipPostalCode,ShipCountry,CustomerID,CustomerCompanyName,ContactName,ContactTitle,Address,City,Region,PostalCode,Country,Phone,Fax,EmployeeID,LastName,FirstName,Title,TitleOfCourtesy,BirthDate,HireDate,Address,City,Region,PostalCode,Country,HomePhone,Extension,Photo,Notes,ReportsTo,PhotoPath,OrderID,ProductID,UnitPrice,Quantity,Discount,ProductID,ProductName,SupplierID,CategoryID,QuantityPerUnit,UnitPrice,UnitsInStock,UnitsOnOrder,ReorderLevel,Discontinued,SupplierID,SupplierCompanyName,ContactName,ContactTitle,Address,City,Region,PostalCode,Country,Phone,Fax,HomePage,CategoryID,CategoryName,Description,Picture
10248,VINET,5,1996-07-04,1996-08-01,1996-07-16,3,32.38,Vins et alcools Chevalier,59 rue de l'Abbaye,Reims,,51100,France,VINET,Vins et alcools Chevalier,Paul Henriot,Accounting Manager,59 rue de l'Abbaye,Reims,,51100,France,26.47.15.10,26.47.15.11,5,Buchanan,Steven,Sales Manager,Mr.,1955-03-04,1993-10-17,14 Garrett Hill,London,,SW1 8JR,UK,(71) 555-4848,3453,\x,"Steven Buchanan graduated from St. Andrews University, Scotland, with a BSC degree in 1976.  Upon joining the company as a sales representative in 1992, he spent 6 months in an orientation program at the Seattle office and then returned to his permanent post in London.  He was promoted to sales manager in March 1993.  Mr. Buchanan has completed the courses ""Successful Telemarketing"" and ""International Sales Management.""  He is fluent in French.",2,http://accweb/emmployees/buchanan.bmp,10248,11,14,12,0,11,Queso Cabrales,5,4,1 kg pkg.,21,22,30,30,0,5,Cooperativa de Quesos 'Las Cabras',Antonio del Valle Saavedra,Export Administrator,Calle del Rosal 4,Oviedo,Asturias,33007,Spain,(98) 598 76 54,,,4,Dairy Products,Cheeses,\x
MERGE, MERGE, MERGE

The first thing we want to do is create a node for each employee and each order and then create a relationship between them.

We might start with the following query:

USING PERIODIC COMMIT 1000
LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-northwind/data/customerDb.csv" AS row
MERGE (employee:Employee {employeeId: row.EmployeeID})
MERGE (order:Order {orderId: row.OrderID})
MERGE (employee)-[:SOLD]->(order)

This does the job but if we profile the query like so…

PROFILE LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-northwind/data/customerDb.csv" AS row
WITH row LIMIT 0
MERGE (employee:Employee {employeeId: row.EmployeeID})
MERGE (order:Order {orderId: row.OrderID})
MERGE (employee)-[:SOLD]->(order)

…we’ll notice an ‘Eager’ lurking on the third line:

==> +----------------+------+--------+----------------------------------+-----------------------------------------+
==> |       Operator | Rows | DbHits |                      Identifiers |                                   Other |
==> +----------------+------+--------+----------------------------------+-----------------------------------------+
==> |    EmptyResult |    0 |      0 |                                  |                                         |
==> | UpdateGraph(0) |    0 |      0 |    employee, order,   UNNAMED216 |                            MergePattern |
==> |          Eager |    0 |      0 |                                  |                                         |
==> | UpdateGraph(1) |    0 |      0 | employee, employee, order, order | MergeNode; :Employee; MergeNode; :Order |
==> |          Slice |    0 |      0 |                                  |                            {  AUTOINT0} |
==> |        LoadCSV |    1 |      0 |                              row |                                         |
==> +----------------+------+--------+----------------------------------+-----------------------------------------+

You’ll notice that when we profile each query we’re stripping off the periodic commit section and adding a ‘WITH row LIMIT 0’. This allows us to generate enough of the query plan to identify the ‘Eager’ operator without actually importing any data.

We want to split that query into two so it can be processed in a non-eager manner:

USING PERIODIC COMMIT 1000
LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-northwind/data/customerDb.csv" AS row
WITH row LIMIT 0
MERGE (employee:Employee {employeeId: row.EmployeeID})
MERGE (order:Order {orderId: row.OrderID})
==> +-------------+------+--------+----------------------------------+-----------------------------------------+
==> |    Operator | Rows | DbHits |                      Identifiers |                                   Other |
==> +-------------+------+--------+----------------------------------+-----------------------------------------+
==> | EmptyResult |    0 |      0 |                                  |                                         |
==> | UpdateGraph |    0 |      0 | employee, employee, order, order | MergeNode; :Employee; MergeNode; :Order |
==> |       Slice |    0 |      0 |                                  |                            {  AUTOINT0} |
==> |     LoadCSV |    1 |      0 |                              row |                                         |
==> +-------------+------+--------+----------------------------------+-----------------------------------------+

Now that we’ve created the employees and orders we can join them together:

USING PERIODIC COMMIT 1000
LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-northwind/data/customerDb.csv" AS row
MATCH (employee:Employee {employeeId: row.EmployeeID})
MATCH (order:Order {orderId: row.OrderID})
MERGE (employee)-[:SOLD]->(order)
==> +----------------+------+--------+-------------------------------+-----------------------------------------------------------+
==> |       Operator | Rows | DbHits |                   Identifiers |                                                     Other |
==> +----------------+------+--------+-------------------------------+-----------------------------------------------------------+
==> |    EmptyResult |    0 |      0 |                               |                                                           |
==> |    UpdateGraph |    0 |      0 | employee, order,   UNNAMED216 |                                              MergePattern |
==> |      Filter(0) |    0 |      0 |                               |          Property(order,orderId) == Property(row,OrderID) |
==> | NodeByLabel(0) |    0 |      0 |                  order, order |                                                    :Order |
==> |      Filter(1) |    0 |      0 |                               | Property(employee,employeeId) == Property(row,EmployeeID) |
==> | NodeByLabel(1) |    0 |      0 |            employee, employee |                                                 :Employee |
==> |          Slice |    0 |      0 |                               |                                              {  AUTOINT0} |
==> |        LoadCSV |    1 |      0 |                           row |                                                           |
==> +----------------+------+--------+-------------------------------+-----------------------------------------------------------+

Not an Eager in sight!

MATCH, MATCH, MATCH, MERGE, MERGE

If we fast forward a few steps we may now have refactored our import script to the point where we create our nodes in one query and the relationships in another query.

Our create query works as expected:

USING PERIODIC COMMIT 1000
LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-northwind/data/customerDb.csv" AS row
MERGE (employee:Employee {employeeId: row.EmployeeID})
MERGE (order:Order {orderId: row.OrderID})
MERGE (product:Product {productId: row.ProductID})
==> +-------------+------+--------+----------------------------------------------------+--------------------------------------------------------------+
==> |    Operator | Rows | DbHits |                                        Identifiers |                                                        Other |
==> +-------------+------+--------+----------------------------------------------------+--------------------------------------------------------------+
==> | EmptyResult |    0 |      0 |                                                    |                                                              |
==> | UpdateGraph |    0 |      0 | employee, employee, order, order, product, product | MergeNode; :Employee; MergeNode; :Order; MergeNode; :Product |
==> |       Slice |    0 |      0 |                                                    |                                                 {  AUTOINT0} |
==> |     LoadCSV |    1 |      0 |                                                row |                                                              |
==> +-------------+------+--------+----------------------------------------------------+--------------------------------------------------------------+

We’ve now got employees, products and orders in the graph. Now let’s create relationships between the trio:

USING PERIODIC COMMIT 1000
LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-northwind/data/customerDb.csv" AS row
MATCH (employee:Employee {employeeId: row.EmployeeID})
MATCH (order:Order {orderId: row.OrderID})
MATCH (product:Product {productId: row.ProductID})
MERGE (employee)-[:SOLD]->(order)
MERGE (order)-[:PRODUCT]->(product)

If we profile that we’ll notice Eager has sneaked in again!

==> +----------------+------+--------+-------------------------------+-----------------------------------------------------------+
==> |       Operator | Rows | DbHits |                   Identifiers |                                                     Other |
==> +----------------+------+--------+-------------------------------+-----------------------------------------------------------+
==> |    EmptyResult |    0 |      0 |                               |                                                           |
==> | UpdateGraph(0) |    0 |      0 |  order, product,   UNNAMED318 |                                              MergePattern |
==> |          Eager |    0 |      0 |                               |                                                           |
==> | UpdateGraph(1) |    0 |      0 | employee, order,   UNNAMED287 |                                              MergePattern |
==> |      Filter(0) |    0 |      0 |                               |    Property(product,productId) == Property(row,ProductID) |
==> | NodeByLabel(0) |    0 |      0 |              product, product |                                                  :Product |
==> |      Filter(1) |    0 |      0 |                               |          Property(order,orderId) == Property(row,OrderID) |
==> | NodeByLabel(1) |    0 |      0 |                  order, order |                                                    :Order |
==> |      Filter(2) |    0 |      0 |                               | Property(employee,employeeId) == Property(row,EmployeeID) |
==> | NodeByLabel(2) |    0 |      0 |            employee, employee |                                                 :Employee |
==> |          Slice |    0 |      0 |                               |                                              {  AUTOINT0} |
==> |        LoadCSV |    1 |      0 |                           row |                                                           |
==> +----------------+------+--------+-------------------------------+-----------------------------------------------------------+

In this case the Eager happens on our second call to MERGE as Michael identified in his post:

The issue is that within a single Cypher statement you have to isolate changes that affect matches further on, e.g. when you CREATE nodes with a label that are suddenly matched by a later MATCH or MERGE operation.

We can work around the problem in this case by having separate queries to create the relationships:

LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-northwind/data/customerDb.csv" AS row
MATCH (employee:Employee {employeeId: row.EmployeeID})
MATCH (order:Order {orderId: row.OrderID})
MERGE (employee)-[:SOLD]->(order)
==> +----------------+------+--------+-------------------------------+-----------------------------------------------------------+
==> |       Operator | Rows | DbHits |                   Identifiers |                                                     Other |
==> +----------------+------+--------+-------------------------------+-----------------------------------------------------------+
==> |    EmptyResult |    0 |      0 |                               |                                                           |
==> |    UpdateGraph |    0 |      0 | employee, order,   UNNAMED236 |                                              MergePattern |
==> |      Filter(0) |    0 |      0 |                               |          Property(order,orderId) == Property(row,OrderID) |
==> | NodeByLabel(0) |    0 |      0 |                  order, order |                                                    :Order |
==> |      Filter(1) |    0 |      0 |                               | Property(employee,employeeId) == Property(row,EmployeeID) |
==> | NodeByLabel(1) |    0 |      0 |            employee, employee |                                                 :Employee |
==> |          Slice |    0 |      0 |                               |                                              {  AUTOINT0} |
==> |        LoadCSV |    1 |      0 |                           row |                                                           |
==> +----------------+------+--------+-------------------------------+-----------------------------------------------------------+
USING PERIODIC COMMIT 1000
LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-northwind/data/customerDb.csv" AS row
MATCH (order:Order {orderId: row.OrderID})
MATCH (product:Product {productId: row.ProductID})
MERGE (order)-[:PRODUCT]->(product)
==> +----------------+------+--------+------------------------------+--------------------------------------------------------+
==> |       Operator | Rows | DbHits |                  Identifiers |                                                  Other |
==> +----------------+------+--------+------------------------------+--------------------------------------------------------+
==> |    EmptyResult |    0 |      0 |                              |                                                        |
==> |    UpdateGraph |    0 |      0 | order, product,   UNNAMED229 |                                           MergePattern |
==> |      Filter(0) |    0 |      0 |                              | Property(product,productId) == Property(row,ProductID) |
==> | NodeByLabel(0) |    0 |      0 |             product, product |                                               :Product |
==> |      Filter(1) |    0 |      0 |                              |       Property(order,orderId) == Property(row,OrderID) |
==> | NodeByLabel(1) |    0 |      0 |                 order, order |                                                 :Order |
==> |          Slice |    0 |      0 |                              |                                           {  AUTOINT0} |
==> |        LoadCSV |    1 |      0 |                          row |                                                        |
==> +----------------+------+--------+------------------------------+--------------------------------------------------------+
MERGE, SET

I try to make LOAD CSV scripts as idempotent as possible so that if we add more rows or columns of data to our CSV we can rerun the query without having to recreate everything.

This can lead you towards the following pattern where we’re creating suppliers:

USING PERIODIC COMMIT 1000
LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-northwind/data/customerDb.csv" AS row
MERGE (supplier:Supplier {supplierId: row.SupplierID})
SET supplier.companyName = row.SupplierCompanyName

We want to ensure that there’s only one Supplier with that SupplierID but we might be incrementally adding new properties and decide to just replace everything by using the ‘SET’ command. If we profile that query, the Eager lurks:

==> +----------------+------+--------+--------------------+----------------------+
==> |       Operator | Rows | DbHits |        Identifiers |                Other |
==> +----------------+------+--------+--------------------+----------------------+
==> |    EmptyResult |    0 |      0 |                    |                      |
==> | UpdateGraph(0) |    0 |      0 |                    |          PropertySet |
==> |          Eager |    0 |      0 |                    |                      |
==> | UpdateGraph(1) |    0 |      0 | supplier, supplier | MergeNode; :Supplier |
==> |          Slice |    0 |      0 |                    |         {  AUTOINT0} |
==> |        LoadCSV |    1 |      0 |                row |                      |
==> +----------------+------+--------+--------------------+----------------------+

We can work around this, at the cost of a bit of duplication, by using ‘ON CREATE SET’ and ‘ON MATCH SET’:

USING PERIODIC COMMIT 1000
LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-northwind/data/customerDb.csv" AS row
MERGE (supplier:Supplier {supplierId: row.SupplierID})
ON CREATE SET supplier.companyName = row.SupplierCompanyName
ON MATCH SET supplier.companyName = row.SupplierCompanyName
==> +-------------+------+--------+--------------------+----------------------+
==> |    Operator | Rows | DbHits |        Identifiers |                Other |
==> +-------------+------+--------+--------------------+----------------------+
==> | EmptyResult |    0 |      0 |                    |                      |
==> | UpdateGraph |    0 |      0 | supplier, supplier | MergeNode; :Supplier |
==> |       Slice |    0 |      0 |                    |         {  AUTOINT0} |
==> |     LoadCSV |    1 |      0 |                row |                      |
==> +-------------+------+--------+--------------------+----------------------+

With the data set I’ve been working with I was able to avoid OutOfMemory exceptions in some cases and reduce the amount of time it took to run the query by a factor of 3 in others.

As time goes on I expect all of these scenarios will be addressed but as of Neo4j 2.1.5 these are the patterns that I’ve identified as being overly eager.

If you know of any others do let me know and I can add them to the post or write a second part.

Categories: Programming

How to Be a 10x Better Speaker with 20 Benefits

NOOP.NL - Jurgen Appelo - Thu, 10/23/2014 - 04:37
speaking

Last week, one conference attendee told me my presentation at Agile Tour Toulouse was perfect.

It was very kind, but I didn’t believe her.

In five years, I have spoken at almost 100 conferences and joined a similar number of community and company events. Thanks to the many discussions I had with event organizers, I think I now understand how to be a more valuable speaker.

The post How to Be a 10x Better Speaker with 20 Benefits appeared first on NOOP.NL.

Categories: Project Management

The Metrics Minute: Velocity


Audio Version:  Software Process and Measurement Cast 119.

Definition:

The simple definition of velocity is the amount of work that is completed in a period of time (typically a sprint). The definition is related to productivity, which is the amount of effort required to complete a unit of work, and to delivery rate (the speed at which work is completed). The inclusion of a time box (the sprint) creates a fixed duration, which transforms velocity into more of a productivity metric than a speed metric (how much work can be done in a specific timescale by a specific team). Therefore, to truly measure velocity you need to estimate the units of work completed, have a definition of complete, and have a time box.

The definition of done in Agile is typically functional code; however, I think the definition can be stretched to reflect whatever terminal deliverable the sprint team has committed to create, based on the definition of done the team has established (for example, requirements for a sprint team working on requirements, or completed test cases in a test sprint).

Many Agile projects use the concept of story points as a metaphor for the size of functional code. Note that other functional size measures can be used just as easily; the examples in this paper use story points as the unit of measure. What is not size, however, is effort or duration. Effort is an input that is consumed while transforming ideas into functional code; the amount of effort required for the transformation is a reflection of size, complexity and other factors. Duration, like effort, is consumed by a sprint rather than created, and therefore does not measure what is delivered.

Formula

To calculate velocity, simply add up the size estimates of the features (user stories, requirements, backlog items, etc.) successfully delivered in an iteration.  The use of the size estimates allows the team to distinguish between items of differing levels of granularity.  Successfully delivered should equate to the definition of done.

Velocity = Story Points Completed Per Sprint

And:

Average velocity = Average Number of Story Points Per Sprint

The formula becomes more complex if staffing varies between sprints (and potentially less valuable as a predictive measure).  In order to account for variable staffing the velocity formula would have to be modified as follows:

velocity per person = sum (size of completed features in a sprint / number of people) / number of sprints or observations

To be really precise (though not necessarily more accurate) we would also have to understand the variability of the data, since variability helps define the level of confidence. Variability generated by differences in team member capabilities is one of the reasons that predictability is enhanced by team stability. As you can see, the more complex the environmental scenario becomes, the more complex the math needed to describe it becomes.
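To make the arithmetic concrete, here is a minimal sketch in Java of the formulas above; the story point counts are made up for illustration and the team size is assumed to be stable across the sprints shown.

import java.util.Arrays;
import java.util.List;

public class VelocityExample {
    public static void main(String[] args) {
        // Story points completed in each finished sprint (illustrative numbers only)
        List<Integer> completedPointsPerSprint = Arrays.asList(21, 18, 24, 20);
        int teamSize = 5; // assumed stable across these sprints

        // Velocity = story points completed in a sprint
        int lastSprintVelocity =
                completedPointsPerSprint.get(completedPointsPerSprint.size() - 1);

        // Average velocity = average story points completed per sprint
        double averageVelocity = completedPointsPerSprint.stream()
                .mapToInt(Integer::intValue)
                .average()
                .orElse(0);

        // Per-person variant for teams whose staffing varies between sprints
        double velocityPerPerson = completedPointsPerSprint.stream()
                .mapToDouble(points -> points.doubleValue() / teamSize)
                .average()
                .orElse(0);

        System.out.printf("Velocity (last sprint): %d story points%n", lastSprintVelocity);
        System.out.printf("Average velocity: %.1f story points per sprint%n", averageVelocity);
        System.out.printf("Velocity per person: %.1f story points per sprint%n", velocityPerPerson);
    }
}

With real data the only change is substituting the story points your team actually completed, sprint by sprint, using your own definition of done.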

Uses:

Velocity is used as a tool in project planning and reporting. Velocity is used in planning to predict how much work will be completed in a sprint and in reporting to communicate what has been done.

When used for planning and estimation the team’s velocity is used along with a prioritized set of granular features (e.g., user stories, backlog items, requirements, etc.) that have been sized or estimated.  The team uses these factors to select what can be done in the upcoming sprint. When the sprint is complete the results are used to update velocity for the next sprint. This is a top down estimation process using historical data.

Over a number of sprints velocity can be used both as a macro planning tool (when will the project be done?) and a reporting tool (we planned at this velocity and are delivering at this velocity).

Velocity can be used in all methodologies and because it is team specific, it is agnostic in terms of units of size.

Issues

As with all metrics, velocity has its share of issues.

The first is that there is an expectation of team stability inherent in the metric. Velocity is impacted by team size and composition, and without collecting additional attributes and correlating them to performance, change is not predictable (except by gut feel or Ouija board). Notes should always be kept on team size and capability so that you can understand your data over time.

Similarly, team dynamics change over time, sometimes radically, and radical changes in team dynamics will affect velocity. Note that shocks to any system of work are apt to create the same issue. Measurement personnel, Scrum masters and team leaders need to be aware of people’s personalities and how they change over time.

The first-time application of velocity requires either historical data from other, similar teams and projects or an estimate. In a perfect world a few sprints would be executed and data gathered before expectations are set; however, clients generally want an idea of whether a project will be completed, when it will be completed, and which functions will be delivered along the way. Estimates of velocity based on the team’s knowledge of the past, or on other crowd-sourcing techniques, are relatively safe starting points assuming continuous recalibration.

The final issue is the requirement for a good definition of done. Done is a concept that has been driven home in the agile community. To quote Mayank Gupta (http://www.scrumalliance.org/articles/106-definition-of-done-a-reference), “An explicit and concrete definition of done may seem small but it can be the most critical checkpoint of an agile project.”  A concrete definition of done provides the basis for estimating velocity by reducing variability based on features that are in different states of completion.  Done also focuses the team by providing a goal to pursue. Make sure you have a crisp definition of done and recognize how that definition can change from sprint to sprint.

Related Metrics:

Productivity (size / effort)

Delivery Rate (duration / size)

Criticisms:

The first criticism of velocity is that the metric is not comparable between teams and, by inference, is not useful as a benchmark. Velocity was conceived as a tool for Scrum masters and team leads to manage and plan individual sprints. There is no overarching set of rules for the metric to enforce standardization, therefore one team’s velocity is apt to reflect something different than the next. The criticism is correct but perhaps off the mark. As a team-level tool velocity works because it is very easy to use and can be consistent; adding the complexity of standards and rules to make it more useful at the organizational level would by definition reduce the simplicity, and therefore the usefulness, at the team level.

A second criticism is that estimates and budgets are typically set early in a project’s life, while team-level velocity may well be unknown until later. The dichotomy between estimating and planning (or budgeting and estimating, for that matter) is often overlooked. Estimates developed early in a project, or in projects with multiple teams, require different techniques to generate. In large projects, applying team-level velocities requires techniques more akin to portfolio management, which add significant overhead. I would suggest that velocity is more valuable as a team planning tool than as a budgeting or estimation tool at a macro level.

A final criticism is that backlog items may not be defined at a consistent level of granularity, so velocity may deliver inconsistent results. I tend to dismiss this criticism, as it is true for any mechanism that relies on relative sizing. Team consistency will help reduce the variability in sizing; however, all teams should strive to break backlog items into stories that are as atomic as possible.


Categories: Process Management

AppCompat v21 — Material Design for Pre-Lollipop Devices!

Android Developers Blog - Wed, 10/22/2014 - 22:58

By Chris Banes, Android Developer Relations

The Android 5.0 SDK was released last Friday, featuring new UI widgets and material design, our visual language focused on good design. To enable you to bring your latest designs to older Android platforms we have expanded our support libraries, including a major update to AppCompat, as well as new RecyclerView, CardView and Palette libraries.

In this post we'll take a look at what’s new in AppCompat and how you can use it to support material design in your apps.

What's new in AppCompat?

AppCompat (aka ActionBarCompat) started out as a backport of the Android 4.0 ActionBar API for devices running on Gingerbread, providing a common API layer on top of the backported implementation and the framework implementation. AppCompat v21 delivers an API and feature set that is up to date with Android 5.0.

In this release, Android introduces a new Toolbar widget. This is a generalization of the Action Bar pattern that gives you much more control and flexibility. Toolbar is a view in your hierarchy just like any other, making it easier to interleave with the rest of your views, animate it, and react to scroll events. You can also set it as your Activity’s action bar, meaning that your standard options menu actions will be displayed within it.

You’ve likely already been using the latest update to AppCompat for a while; it has been included in various Google app updates over the past few weeks, including Play Store and Play Newsstand. It has also been integrated into the Google I/O Android app, which is open source.

Setup

If you’re using Gradle, add appcompat as a dependency in your build.gradle file:

dependencies {
    compile "com.android.support:appcompat-v7:21.0.+"
}
New integration

If you are not currently using AppCompat, or you are starting from scratch, here's how to set it up:

  • All of your Activities must extend from ActionBarActivity, which extends from FragmentActivity from the v4 support library, so you can continue to use fragments.
  • All of your themes (that want an Action Bar/Toolbar) must inherit from Theme.AppCompat. There are variants available, including Light and NoActionBar.
  • When inflating anything to be displayed on the action bar (such as a SpinnerAdapter for list navigation in the toolbar), make sure you use the action bar’s themed context, retrieved via getSupportActionBar().getThemedContext().
  • You must use the static methods in MenuItemCompat for any action-related calls on a MenuItem.

For more information, see the Action Bar API guide, which is a comprehensive guide to AppCompat.
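As a quick illustration of those integration steps, here is a minimal, hypothetical Activity; the layout resource, menu resource and action_search menu item are assumptions made for the example rather than anything AppCompat requires.

import android.os.Bundle;
import android.support.v4.view.MenuItemCompat;
import android.support.v7.app.ActionBarActivity;
import android.view.Menu;
import android.view.MenuItem;

public class MainActivity extends ActionBarActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Any layout will do; R.layout.main is assumed to exist in your project
        setContentView(R.layout.main);
    }

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        // R.menu.main and its action_search item are assumed to exist
        getMenuInflater().inflate(R.menu.main, menu);

        // Use MenuItemCompat for action-related calls on a MenuItem
        MenuItem searchItem = menu.findItem(R.id.action_search);
        MenuItemCompat.setShowAsAction(searchItem, MenuItemCompat.SHOW_AS_ACTION_IF_ROOM);
        return true;
    }
}

The theme applied to this Activity would need to inherit from Theme.AppCompat, as described above.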

Migration from previous setup

For most apps, you now only need one theme declaration, in values/:

values/themes.xml:

<style name="Theme.MyTheme" parent="Theme.AppCompat.Light">
    <!-- Set AppCompat’s actionBarStyle -->
    <item name="actionBarStyle">@style/MyActionBarStyle</item>

    <!-- Set AppCompat’s color theming attrs -->
    <item name="colorPrimary">@color/my_awesome_red</item>
    <item name="colorPrimaryDark">@color/my_awesome_darker_red</item>
    
    <!-- The rest of your attributes -->
</style>

You can now remove all of your values-v14+ Action Bar styles.

Theming

AppCompat has support for the new color palette theme attributes which allow you to easily customize your theme to fit your brand with primary and accent colors. For example:

values/themes.xml:

<style name="Theme.MyTheme" parent="Theme.AppCompat.Light">
    <!-- colorPrimary is used for the default action bar background -->
    <item name="colorPrimary">@color/my_awesome_color</item>

    <!-- colorPrimaryDark is used for the status bar -->
    <item name="colorPrimaryDark">@color/my_awesome_darker_color</item>

    <!-- colorAccent is used as the default value for colorControlActivated,
         which is used to tint widgets -->
    <item name="colorAccent">@color/accent</item>

    <!-- You can also set colorControlNormal, colorControlActivated
         colorControlHighlight, and colorSwitchThumbNormal. -->
    
</style>

When you set these attributes, AppCompat automatically propagates their values to the framework attributes on API 21+. This automatically colors the status bar and Overview (Recents) task entry.

On older platforms, AppCompat emulates the color theming where possible. At the moment this is limited to coloring the action bar and some widgets.

Widget tinting

When running on devices with Android 5.0, all of the widgets are tinted using the color theme attributes we just talked about. There are two main features which allow this on Lollipop: drawable tinting, and referencing theme attributes (of the form ?attr/foo) in drawables.

AppCompat provides similar behaviour on earlier versions of Android for a subset of UI widgets.

You don’t need to do anything special to make these work; just use these controls in your layouts as usual and AppCompat will do the rest (with some caveats; see the FAQ below).

Toolbar Widget

Toolbar is fully supported in AppCompat and has feature and API parity with the framework widget. In AppCompat, Toolbar is implemented in the android.support.v7.widget.Toolbar class. There are two ways to use Toolbar:

  • Use a Toolbar as an Action Bar when you want to use the existing Action Bar facilities (such as menu inflation and selection, ActionBarDrawerToggle, and so on) but want to have more control over its appearance.
  • Use a standalone Toolbar when you want to use the pattern in your app for situations that an Action Bar would not support; for example, showing multiple toolbars on the screen, spanning only part of the width, and so on.
Action Bar

To use Toolbar as an Action Bar, first disable the decor-provided Action Bar. The easiest way is to have your theme extend from Theme.AppCompat.NoActionBar (or its light variant).

Second, create a Toolbar instance, usually via your layout XML:

<android.support.v7.widget.Toolbar
    android:id="@+id/my_awesome_toolbar"
    android:layout_height="wrap_content"
    android:layout_width="match_parent"
    android:minHeight="?attr/actionBarSize"
    android:background="?attr/colorPrimary" />

The height, width, background, and so on are totally up to you; these are just good examples. As Toolbar is just a ViewGroup, you can style and position it however you want.

Then in your Activity or Fragment, set the Toolbar to act as your Action Bar:

@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.blah);

    Toolbar toolbar = (Toolbar) findViewById(R.id.my_awesome_toolbar);
    setSupportActionBar(toolbar);
}

From this point on, all menu items are displayed in your Toolbar, populated via the standard options menu callbacks.

Standalone

The difference in standalone mode is that you do not set the Toolbar to act as your action bar. For this reason, you can use any AppCompat theme and you do not need to disable the decor-provided Action Bar.

In standalone mode, you need to manually populate the Toolbar with content/actions. For instance, if you want it to display actions, you need to inflate a menu into it:

@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.blah);

    Toolbar toolbar = (Toolbar) findViewById(R.id.my_awesome_toolbar);

    // Set an OnMenuItemClickListener to handle menu item clicks
    toolbar.setOnMenuItemClickListener(new Toolbar.OnMenuItemClickListener() {
        @Override
        public boolean onMenuItemClick(MenuItem item) {
            // Handle the menu item
            return true;
        }
    });

    // Inflate a menu to be displayed in the toolbar
    toolbar.inflateMenu(R.menu.your_toolbar_menu);
}

There are many other things you can do with Toolbar. For more information, see the Toolbar API reference.

Styling

Styling of Toolbar is done differently from the standard action bar, and is set directly on the view.

Here's a basic style you should be using when you're using a Toolbar as your action bar:

<android.support.v7.widget.Toolbar  
    android:layout_height="wrap_content"
    android:layout_width="match_parent"
    android:minHeight="?attr/actionBarSize"
    app:theme="@style/ThemeOverlay.AppCompat.ActionBar" />

The app:theme declaration will make sure that your text and items use solid colors (i.e. 100% opacity white).

DarkActionBar

You can style Toolbar instances directly using layout attributes. To achieve a Toolbar which looks like 'DarkActionBar' (dark content, light overflow menu), provide the theme and popupTheme attributes:

<android.support.v7.widget.Toolbar
    android:layout_height="wrap_content"
    android:layout_width="match_parent"
    android:minHeight="@dimen/triple_height_toolbar"
    app:theme="@style/ThemeOverlay.AppCompat.Dark.ActionBar"
    app:popupTheme="@style/ThemeOverlay.AppCompat.Light" />
SearchView Widget

AppCompat offers Lollipop’s updated SearchView API, which is far more customizable and styleable (cue the applause). We now use the Lollipop style structure instead of the old searchView* theme attributes.

Here’s how you style SearchView:

values/themes.xml:
<style name="Theme.MyTheme" parent="Theme.AppCompat">
    <item name="searchViewStyle">@style/MySearchViewStyle</item>
</style>
<style name="MySearchViewStyle" parent="Widget.AppCompat.SearchView">
    <!-- Background for the search query section (e.g. EditText) -->
    <item name="queryBackground">...</item>
    <!-- Background for the actions section (e.g. voice, submit) -->
    <item name="submitBackground">...</item>
    <!-- Close button icon -->
    <item name="closeIcon">...</item>
    <!-- Search button icon -->
    <item name="searchIcon">...</item>
    <!-- Go/commit button icon -->
    <item name="goIcon">...</item>
    <!-- Voice search button icon -->
    <item name="voiceIcon">...</item>
    <!-- Commit icon shown in the query suggestion row -->
    <item name="commitIcon">...</item>
    <!-- Layout for query suggestion rows -->
    <item name="suggestionRowLayout">...</item>
</style>

You do not need to set all (or any) of these; the defaults will work for the majority of apps.

Toolbar is coming...

Hopefully this post will help you get up and running with AppCompat and let you create some awesome material apps. Let us know in the comments/G+/Twitter if you have questions about AppCompat or any of the support libraries, or where we could provide more documentation.

FAQ
Why is my EditText (or other widget listed above) not being tinted correctly on my pre-Lollipop device?

The widget tinting in AppCompat works by intercepting any layout inflation and inserting a special tint-aware version of the widget in its place. For most people this will work fine, but I can think of a few scenarios where this won’t work, including:

  • You have your own custom version of the widget (i.e. you’ve extended EditText)
  • You are creating the EditText without a LayoutInflater (i.e., calling new EditText()).

The special tint-aware widgets are currently hidden as they’re an unfinished implementation detail. This may change in the future.

Why has X widget not been material-styled when running on pre-Lollipop?
Only some of the most common widgets have been updated so far. There are more coming in future releases of AppCompat.
Why does my Action Bar have a shadow on Android Lollipop? I’ve set android:windowContentOverlay to null.
On Lollipop, the action bar shadow is provided using the new elevation API. To remove it, either call getSupportActionBar().setElevation(0), or set the elevation attribute in your Action Bar style.
Why are there no ripples on pre-Lollipop?
A lot of what allows RippleDrawable to run smoothly is Android 5.0’s new RenderThread. To optimize for performance on previous versions of Android, we've left RippleDrawable out for now.
How do I use AppCompat with Preferences?
You can continue to use PreferenceFragment in your ActionBarActivity when running on an API v11+ device. For devices before that, you will need to provide a normal PreferenceActivity which is not material-styled.
Join the discussion on +Android Developers
Categories: Programming

Episode 212: Randy Shoup on Company Culture

Tobias Kaatz talks to former Kixeye CTO Randy Shoup about company culture in the software industry in this sequel to the show on hiring in the software industry (Episode 208). Prior to Kixeye, Randy worked as director of engineering at Google for the Google App Engine and as chief engineer and distinguished architect at eBay. […]
Categories: Programming

Paper: Actor Model of Computation: Scalable Robust Information Systems

With Reactive Systems becoming the new old hotness, it will help to have a thorough grounding in the Actor Model. Here's a good start. Carl Hewitt in Actor Model of Computation: Scalable Robust Information Systems gives a very thorough and relatively concise explanation of the Actor model.

Here's the abstract.

The Actor model is a mathematical theory that treats "Actors" as the universal primitives of concurrent digital computation. The model has been used both as a framework for a theoretical understanding of concurrency, and as the theoretical basis for several practical implementations of concurrent systems. Unlike previous models of computation, the Actor model was inspired by physical laws. It was also influenced by the programming languages Lisp, Simula 67 and Smalltalk-72, as well as ideas for Petri Nets, capability-based systems and packet switching. The advent of massive concurrency through client-cloud computing and many-core computer architectures has galvanized interest in the Actor model.


Actor technology will see significant application for integrating all kinds of digital information for individuals, groups, and organizations so their information usefully links together. Information integration needs to make use of the following information system principles:
    * Persistence: Information is collected and indexed.
    * Concurrency: Work proceeds interactively and concurrently, overlapping in time.
    * Quasi-commutativity: Information can be used regardless of whether it initiates new work or become relevant to ongoing work.
    * Sponsorship: Sponsors provide resources for computation, i.e., processing, storage, and communications.
    * Pluralism: Information is heterogeneous, overlapping and often inconsistent.
    * Provenance: The provenance of information is carefully tracked and recorded.

The Actor Model is intended to provide a foundation for inconsistency robust information integration.
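If it helps to ground the abstract, here is a deliberately tiny, hypothetical sketch in Java of the core mechanism the paper builds on: an actor owns a private mailbox, receives messages asynchronously, and processes them one at a time. This is not Hewitt’s formal model; real actor runtimes add addressing, supervision and distribution on top.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class GreetingActor {
    // Private state: the mailbox is never shared with other threads directly
    private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();
    private final Thread worker = new Thread(this::processMessages);

    GreetingActor() {
        worker.setDaemon(true);
        worker.start();
    }

    // Asynchronous send: the caller never waits for the actor to do its work
    void send(String message) {
        mailbox.offer(message);
    }

    private void processMessages() {
        try {
            while (true) {
                String message = mailbox.take(); // one message at a time
                System.out.println("Hello, " + message);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

public class ActorDemo {
    public static void main(String[] args) throws InterruptedException {
        GreetingActor actor = new GreetingActor();
        actor.send("world");
        actor.send("actors");
        Thread.sleep(100); // give the daemon worker time to drain the mailbox
    }
}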

Categories: Architecture

Baloney Claims: Pseudo–science and the Art of Software Methods

Herding Cats - Glen Alleman - Wed, 10/22/2014 - 16:40

Ascertaining the success and applicability of any claims that fall outside the accepted practices of business, engineering, or governance processes requires careful testing of ideas through tangible evidence that they are actually going to do what it is conjectured they are supposed to do.

The structure of this checklist is taken directly from Scientific American’s essay on scientific baloney, but it sure feels right for many of the outrageous claims found in today’s software development community about approaches to estimating cost, schedule, and likely outcomes.

How reliable is the source of the claim?

Self-pronounced experts often appear credible at first glance, but when examined more closely, the facts and figures they cite are distorted, taken out of context, long out of date, mathematically wrong, missing critical domain and context basis, or occasionally even fabricated.

In many instances the data used to support the claims are weak or poorly formed, relying on surveys of friends or hearsay, small population samples, classroom experiments, or, worse, anecdotal evidence where the expert extends personal experience to a larger population.

Does this source often make similar claims?

Self-pronounced experts have a habit of going well beyond the facts and generalizing their claims to a larger population of problems or domains. Many proponents of ideas make claims that cannot be substantiated within a testable framework. This is the nature of early development in the engineering world. Of course, some great thinkers do frequently go beyond the data in their creative speculations.

But when those creative thinkers are used to support the new claims, it is more likely that the hard work of testing the claim outside of personal experience hasn’t been performed.

They said agile wouldn't work, so my conjecture is getting the same criticism and I'll be considered just like those guys when I'm proven right.

Have the claims been verified by another source?
 

Typically self-pronounced experts make statements that are unverified or verified only by a source within their own private circle, or whose conclusions are based primarily on anecdotal information.

We must ask: who is checking the claims, and even who is checking the checkers? Outside verification is as crucial to good business decisions as it is to good methodology development.

How does the claim fit with what we know about how the world works?

Any specific claim must be placed into a larger context to see how it fits. When people claim that a specific method, approach, or technique results in significant benefits, dramatic changes in an outcome, etc. they are usually not presenting the specific context for the application of their idea.

Such a claim is typically not supported by quantitative statistics either. There may be qualitative data, but this is likely to be biased by the experimental method as well as by the underlying population of the sample statistics.

In most cases to date, the sample size is minuscule compared to that needed to draw correlations and causations to the conjectured outcomes.

Has anyone gone out of the way to disprove the claim, or has only supportive evidence been sought?

This is the confirmation bias, or the tendency to seek confirmatory evidence and to reject or ignore disconfirmatory evidence. The confirmation bias is powerful, pervasive and almost impossible to avoid.

It is why the methods that emphasize checking and rechecking, verification and replication, and especially attempts to falsify a claim, are critical.

When self-selected communities see external criticism as harassment, or respond with “you’re simply not getting it” or “those people are just like talking to a box of rocks,” the confirmation bias is in full force.

Does the preponderance of evidence point to the claimant's conclusion or to a different one?
 

Evidence is the basis of all confirmation processes. The problem is that having evidence alone is necessary but not sufficient. The evidence must somehow be “predicted” by the process, fit the process model, or somehow participate in the process in a supportive manner.

Is the claimant employing the accepted rules of reason and tools of research, or have these been abandoned in favor of others that lead to the desired conclusion?

Unique and innovative ways of conducting research, processing data, and “conjecturing” about the results are not statistically sound. In almost every discipline there are accepted mechanisms for conducting research. One of the first courses taken in graduate school is quantitative methods for experiments. This course sets the ground rules for conducting research in the field.

Is the claimant providing an explanation for the observed phenomena or merely denying the existing explanation? 

This is a classic debate strategy—criticize your opponent and never affirm what you believe to avoid criticism.

“Show us your data” is the starting point for engaging in a conversation about a speculative idea.

If the claimant proffers a new explanation, does it account for as many phenomena as the old explanation did? 

This concept is usually lost on "innovative" claims. The need to explain previous results is mandatory. Without this bridge to past results, a new suggested approach has no foundation for acceptance.

Do the claimant's personal beliefs and biases drive the conclusions, or vice versa?
 

All claimants hold social, political and ideological beliefs that could potentially slant their interpretations of the data, but how do those biases and beliefs affect their research in practice?

Usually, during some form of peer review, such biases and beliefs are rooted out, or the paper or book is rejected.

In the absence of peer review (self-publishing is popular these days) there is no external assessment of the ideas, and the author therefore reinforces the confirmation bias.

So the next time you hear a suggestion that appears to violate a principle of business, economics, or even physics, think of these questions. Let’s apply them to the #NoEstimates suggestion that we can make decisions in the absence of estimates, that is, that we can make decisions about a future outcome without estimating the cost to achieve that outcome and the impact of that outcome.

The core question is: how can this conjecture be tested beyond the personal anecdotes of those proffering the notion that decisions can be made in the absence of estimates? Certainly those making the claim have no interest in performing that test. It is incumbent on those attempting to apply the notion to first test it for validity, applicability, and simple credibility.

A final recommendation is Ken Schwaber’s talk and slides on evidence-based discussions around improving the business of software development, and the book he gave away at the end of the talk, Hard Facts, Dangerous Half-Truths And Total Nonsense: Profiting From Evidence-Based Management.

Related articles: Falsifiability and Junk Science
Categories: Project Management

Think a Series of Sprints, Not Marathons

When you drive business change and digital initiatives with Cloud, Mobile, Social, and Big Data (and Internet of Things), successful businesses think a series of sprints, not marathons.

Successful businesses go digital by transforming their customer experiences, their employee experiences, and their back-office experiences through rapid prototyping, building proofs-of-concept, testing pilots, and going to production.  It’s a fast cycle of prototype –> pilot –> POC –> production.

These short cycles create rapid learning loops, build momentum, and help adapt for change.

In the book, Leading Digital: Turning Technology into Business Transformation, George Westerman, Didier Bonnet, and Andrew McAfee, share some of their lessons learned in driving digital initiatives and agile transformation.

The Digital World Moves Quickly

Avoid Big Up Front Design.  Whenever there is a big lag time between designing it, developing it, and using it, you’re introducing more risk.  You’re breaking feedback loops.  You’re falling into the pit of analysis paralysis.   Focus on “just enough design” so that you can test what works and what doesn’t, and respond accordingly.

Via Leading Digital:

“The digital world moves quickly.  The rapid pace of technology innovation today does not lend itself to multiyear planning and waterfall development methods common in the ERP era.  Markets change, new technologies become mainstream, and disruptive entrants begin courting your customers.  Your roadmap will need to be nimble enough to recognize these changes, adapt for them, and course-correct.”

Keep a Vision in Mind and Build on Success Along the Way

Hold on to the vision and use that to guide you as you test your ideas and implement them, without getting bogged down.

Via Leading Digital:

“To design an agile transformation, borrow an approach that has become common among today's leading software companies.  Keep people committed to the end goal, but pace your initiatives as short sprints of effort.  Create prototype solutions, and experiment with new technologies or approaches.  Evaluate the results, and incorporate the results into your evolving roadmap.  Adam Brotman, Starbucks CDO, explained the iterative process: 'We didn't have all the answers, but we started thinking about other things we could do ... I think it worked not to go too far, too fast, but to keep a vision in mind and keep building on success along the way.”

Test Ideas, Save Time, Adapt to Changes

Short cycle times help you respond to market change and adapt as you learn what works and what doesn’t.

Via Leading Digital:

“The test-and-learn approach will require some new ways of working in its own right, but it enjoys some distinct advantages.  By marketing ideas quickly before they go to scale, this approach saves time and money.  Its short cycle times also make it more adaptive to external changes.  Finally, it enables your transformation to sustain momentum through small, incremental successes, rather than the big-bang approach of long-term programs.”

When it comes to your digital strategy and driving business transformation, drive your business change the agile way.

You Might Also Like

10 Ways to Make Agile Design More Effective

Building Better Business Cases for Digital Initiatives

Cloud Changes the Game from Deployment to Adoption

How Digital is Changing Physical Experiences

McKinsey on Unleashing the Value of Big Data Analytics

Categories: Architecture, Programming

Software Development Linkopedia October 2014

From the Editor of Methods & Tools - Wed, 10/22/2014 - 15:04
Here is our monthly selection of interesting knowledge material on programming, software testing and project management.  This month you will find some interesting information and opinions about managing code duplication, product backlog prioritization, engineering management, organizational culture, mobile testing, code reuse and big data. Blog: Practical Guide to Code Clones (Part 1) Blog: Practical Guide to Code Clones (Part 2) Blog: Selecting Backlog Items By Cost of Delay Blog: WIP and Priorities – how to get fast and focused! Blog: 44 engineering management lessons Article: Do’s and Don’ts of Agile Retrospectives Article: Organizational Culture Considerations with Agile Article: ...

Comparing solutions

Coding the Architecture - Simon Brown - Wed, 10/22/2014 - 14:38

Here's a little snippet that my class really picked up on yesterday. During the training course, we get people into groups and ask them to design a solution based upon some simple requirements. The deliverable is "one or more diagrams to describe your solution". Aside from answering a few questions about the business domain and the environment, that's pretty much all the guidance that groups get.

As you can probably imagine, the resulting diagrams are all very different. Some are very high-level, others very low-level. Some show static structure, others show runtime and behavioural views. Some show technology, others don't. Without a consistent approach, these differing diagrams make it hard for people to understand the solutions being presented to them. But furthermore, the differing diagrams make it really hard to compare solutions too.

As I've said before, I don't actually teach people to draw pictures. What I do instead is to teach people how to think about, and therefore describe, their software using a simple set of abstractions. This is my C4 model. With these abstractions in place, groups then redraw their diagrams. Despite the notations still differing between the groups, the solutions are much easier to understand. The solutions are much easier to compare too, because of the consistency in the way they are being described. A common set of abstractions is much more important than a common notation.

Categories: Architecture

Agile Concepts: The Rationale for Longer Cadence

 

The space between has to be big enough and no bigger.

Cadence represents a predictable rhythm. The predictability of cadence is crucial for Agile teams to build trust. Trust is built between the team and the organization, and amongst the team members themselves, by meeting commitments over and over and over. The power of cadence is important to the team’s well-being; therefore, the choice of cadence is often debated in new teams. Many young teams make the mistake of choosing a slower (longer) cadence for many reasons. The most often cited reasons I have found are:

  1. The team has not adopted or adapted their development process to the concept of delivering working functionality in a single sprint. When a team leverages a waterfall or remnants of a waterfall method, work passes from phase to phase at sprint boundaries. For example, it passes from coding to testing. Longer time boxes feel appropriate to the team so they can get analysis, design, coding, testing and implementation done before the next sprint. The problem is that they are trying to “agilefy” a non-Agile process, which rarely works well.
  2. New Agile teams tend to lack confidence in their capabilities. Capabilities that teams need to sort out include both learning the techniques of Agile and the abilities of the team members. Teams convince themselves that a longer sprint will provide a bit of padding that will accommodate the learning process. The problem with adding padding is twofold: first, work tends to expand to fill the available time (Parkinson’s Law); second, lengthening the sprint delays retrospectives. Retrospectives provide a team the platform needed to identify issues and make changes, which leads to improved capabilities.
  3. Large stories that can’t be completed in a single sprint are often noted as a reason to adopt longer duration sprints and a slower cadence. This is generally a reflection of improper backlog grooming. More mature Agile teams typically adopt a rule of thumb to help guide the breakdown of stories. Examples include maximum size limits (e.g. 8 story points, 7 quick function points) or duration limits (a story must be able to be completed in 3 days).
  4. Planning sessions take too long, eating into the development time of the sprint. Similar to the large stories, overly long planning sessions are typically a reflection of either poor backlog grooming or trying to plan and commit to more than can be done within the sprint time box. Teams often change the length of a sprint rather than doing better grooming or taking less work. Often, even when the sprint duration is expanded, the problem of overly long planning sessions returns as more stories are taken on, and it worsens as the team gets bored with planning.

Teams often think that they can solve process problems by lengthening the duration of their sprints, which slows their cadence. Typically a better solution is to make sure they are practicing Agile techniques rather than trying to “agilefy” a waterfall method, or to do a better job grooming stories. A faster cadence is generally better, if for no other reason than that the team will get to review their approach sooner by doing retrospectives!


Categories: Process Management

Welcome Firebase to the Google Cloud Platform Team

Google Code Blog - Tue, 10/21/2014 - 18:31
Today we extend a warm welcome to Firebase, who is joining the Google Cloud Platform team. Firebase makes it very easy for developers to build mobile and web apps that store and sync data in realtime.

Mobile is one of the fastest-growing categories of app development, but it’s also still too hard for most developers. With Firebase, developers are able to easily sync data across web and mobile apps without having to manage connections or write complex sync logic. Firebase makes it easy to build applications that work offline and has full-featured libraries for all major web and mobile platforms, including Android and iOS.

By combining Firebase with Google Cloud Platform, we’ll be able to build the best end-to-end platform for mobile application development. If you’re already a Firebase developer, you’ll start seeing improvements right away and if you’re a Google Cloud Platform customer, you’ll find it even easier to create great mobile and web apps. The entire Firebase team is joining Google and, under the leadership of Firebase co-founders James Tamplin and Andrew Lee, will be working hard to bring you great new features. Not only will the products you already love continue to get better, but you’ll also gain access to the full power of Google Cloud Platform.

At Google Cloud Platform Live on November 4, we’ll be demonstrating new Firebase features and integrations with Cloud Platform. You can join us there in person or you can register to stream online for free.

If you are looking for more info check out the Firebase blog. We can’t wait to see what applications you build!

Greg DeMichillie, Director of Product Management
Categories: Programming

2 minute models: A walk through the Business Objectives Model

Software Requirements Blog - Seilevel.com - Tue, 10/21/2014 - 17:00
Please join me for a quick walk through our Business Objectives Model. This video only scratches the surface of how valuable this model really is, and how it can be used for a variety of projects. Please feel free to make suggestions and ask questions in the comments section, and I will address them in […]
Categories: Requirements

Quote of the Day

Herding Cats - Glen Alleman - Tue, 10/21/2014 - 16:23

If a man will begin with certainties, he shall end in doubts, but if he will be content to begin with doubts, he shall end in certainties. - Francis Bacon

Everything in project work is uncertainty. Estimating is required to discover the extent of the range of these uncertainties, the impacts of the uncertainties on cost, schedule, and the performance of the products or services.

To suggest that decisions can be made in the presence of these uncertainties without knowledge of the future outcomes and the cost of achieving those outcomes, or without deciding between alternatives with differing future outcomes, is to ignore completely the notion of uncertainty and the microeconomics of decision making in the presence of these uncertainties.

Categories: Project Management

Is Your Culture Working the Way You Think it Is?

Long ago, I was a project manager and senior engineer for a company undergoing a Change Transformation. You know the kind, where the culture changes, along with the process. The senior managers had bought into the changes. The middle managers were muddling through, implementing the changes as best they could.

Us project managers and the technical staff, we were the ones doing the bulk of the changes. The changes weren’t as significant as an agile transformation, but they were big.

One day, the Big Bosses, the CEO and the VP Engineering spoke at an all-hands meeting. “You are empowered,” they said. No, they didn’t say it as a duet. They each said it separately. They had choreographed speeches, with great slide shows, eight by ten color glossies, and pictures. They had a vision. They just knew what the future would hold.

I managed to keep my big mouth shut.

The company was not doing well. We had too many managers for not enough engineers or contracts. If you could count, you could see that.

I was traveling back and forth to a client in the midwest. At one point, the company owed me four weeks of travel expenses. I quietly explained that no, I was not going to book any more airline travel or hotel nights until I was paid in full for my previous travel.

“I’m empowered. I can refuse to get on a plane.”

That did not go over well with anyone except my boss, who was in hysterics. He thought it was quite funny. My boss agreed I should be reimbursed before I racked up more charges.

Somehow, they did manage to reimburse me. I explained that from now on, I was not going to float the company more than a week’s worth of expenses. If they wanted me to travel, I expected to be reimbursed within a week of travel. I got my expenses in the following Monday. They could reimburse me four days later, on Friday.

“But that’s too fast for us,” explained one of the people in Accounting.

“Then I don’t have to travel every other week,” I explained. “You see, I’m empowered. I’ll travel after I get the money for the previous trip. I won’t make a new reservation until I receive all the money I spent for all my previous trips. It’s fine with me. You’ll just have to decide how important this project is. It’s okay.”

The VP came to me and tried to talk me out of it. I didn’t budge. (Imagine that!) I told him that I didn’t need to float the company money. I was empowered.

“Do you like that word?”

“Sure I do.”

“Do you feel empowered?”

“Not at all. I have no power at all, except over my actions. I have plenty of power over what I choose to do. I am exercising that power. I realized that during your dog and pony show.

“You’re not changing our culture. You’re making it more difficult for me to do my job. That’s fine. I’m explaining how I will work.”

The company didn’t get a contract it had expected. It had a layoff. Guess who got laid off? Yes, I did. It was a good thing. I got a better job for more money. And, I didn’t have to travel every other week.

Change can be great for an organization. But telling people the culture is one thing and then living up to that thing can be difficult. That’s why this month’s management myth is Myth 34: You’re Empowered Because I Say You Are.

I picked on empowerment. I could have chosen “open door.” Or “employees are our greatest asset.” (Just read that sentence. Asset???)

How you talk about culture says a lot about what the culture is. Remember, culture is how you treat people, what you reward, and what is okay to talk about.

Go read Myth 34: You’re Empowered Because I Say You Are.

Categories: Project Management

Velocity-Driven Sprint Planning

Mike Cohn's Blog - Tue, 10/21/2014 - 15:00

There are two general approaches to planning sprints: velocity-driven planning and commitment-driven planning. Over the next three posts, I will describe each approach, and make my case for the one I prefer.

Let’s begin with velocity-driven sprint planning because it’s the easiest to describe. Velocity-driven sprint planning is based on the premise that the amount of work a team will do in the coming sprint is roughly equal to what they’ve done in prior sprints. This is a very valid premise.

This does assume, of course, things such as a constant team size, similar work from sprint to sprint, consistent sprint lengths, and so on. Each of these assumptions is generally valid, and violations of the assumptions are easily identified; that is, when a sprint changes from ten to nine working days because of a holiday, the team knows this in advance.

The steps in velocity-driven sprint planning are as follows:

1. Determine the team’s historical average velocity.
2. Select a number of product backlog items equal to that velocity.

Some teams stop there. Others include the additional step of:

3. Identify the tasks involved in the selected user stories and see if it feels like the right amount of work.

And some teams will go even further and:

4. Estimate the tasks and see if the sum of the work is in line with past sprints.

Let’s look in more detail at each of these steps.

Step 1: Determine the Team’s Average Velocity

The first step in velocity-driven sprint planning is determining the team’s average velocity. Some agile teams prefer to look only at their most recent velocity. Their logic is that the most recent sprint is the single best indicator of how much will be accomplished in the next sprint. While this is certainly valid, I prefer to look back over a longer period if the team has the data.

I prefer to take the average of the eight most recent sprints. I’ve found that to be fairly predictive of the future. I certainly wouldn’t take the average over the last 10 years. How fast a team was going during the Bush Administration is irrelevant to that team’s velocity today. But, similarly, if you have the data, I don’t think you should look back only one sprint.

Step 2: Select Product Backlog Items

Armed with an average velocity, team members select the top items from the product backlog. They grab items until the selected work adds up to the average velocity.

At this point, velocity-driven planning in the strictest sense is finished. Rather than taking an hour or two per week of sprint length, sprint planning is done in a matter of seconds. As quickly as the team can select items that add up to their average velocity, they’re done.
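To make the arithmetic concrete, here is a minimal sketch of steps 1 and 2 in Python, assuming hypothetical inputs (a list of past sprint velocities and a priority-ordered backlog of story/estimate pairs); the numbers are made up and the code is an illustration of the idea, not something from this post:

# Sketch of velocity-driven selection: average the recent sprints, then take
# top-priority backlog items until adding the next one would exceed that average.
def average_velocity(recent_velocities, window=8):
    recent = recent_velocities[-window:]
    return sum(recent) / len(recent)

def select_backlog_items(backlog, velocity):
    selected, total = [], 0
    for story, estimate in backlog:
        if total + estimate > velocity:
            break
        selected.append(story)
        total += estimate
    return selected, total

velocities = [21, 18, 24, 20, 19, 22, 23, 20]             # last eight sprints (made up)
backlog = [("story A", 8), ("story B", 5), ("story C", 5),
           ("story D", 3), ("story E", 8)]                 # ordered by priority (made up)
velocity = average_velocity(velocities)                    # 20.875
print(select_backlog_items(backlog, velocity))             # (['story A', 'story B', 'story C'], 18)

A real team might stop at the first item that does not fit, as above, or swap in a smaller story from lower in the backlog; either way, the selection takes seconds once the average is known.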

Step 3: Identify Tasks (Optional)

Most teams who favor velocity-driven sprint planning realize that planning a sprint in a few seconds is probably insufficient. And so many will add in this third step of identifying the tasks to be performed.

To do this, team members discuss each of the product backlog items that have been selected for the sprint and attempt to identify the key steps necessary to deliver each one. Teams can’t possibly think of everything, so they instead make a good-faith effort to think of all they can.

After having identified most of the tasks that will ultimately be necessary, team members look at the selected product backlog items and the tasks for each, and decide if the sprint seems full. They may conclude that the sprint is overfilled or insufficiently filled and add or drop product backlog items. At this point, some teams are done, and they conclude sprint planning with a set of selected product backlog items and a list of the tasks to deliver them. Some teams, though, proceed to a final step:

Step 4: Estimate the Tasks (Optional)

With a nice list of tasks in front of them, some teams decide to go ahead and estimate those tasks in hours, as an aid in helping them decide if they’ve selected the right amount of work.

In next week’s post, I’ll describe an alternative approach, commitment-driven sprint planning. And after that, I’ll share some recommendations.

What's New in Android 5.0 Lollipop

Android Developers Blog - Tue, 10/21/2014 - 09:06

By Ankur Kotwal, Developer Advocate

Android 5.0 Lollipop is the biggest update of Android to date, introducing an all new visual style, improved performance, and much more. Android 5.0 Lollipop also extends across screens big and small, including phones, tablets, wearables, TVs and cars, to give your users access to information when they need it most.

To get you started on developing and testing on Android 5.0 Lollipop, here are some of the developer highlights with links to related videos and documentation.

User experience
  • Material design for the multiscreen world — Material Design is a new approach for designing apps in today’s multi-device world that applies a comprehensive strategy to visual, motion, and interaction design across a number of platforms and form factors. Android 5.0 brings Material Design to the platform, with a full set of tools for implementing material design in your apps. The system is incredibly flexible, allowing your app to express its individual character and brand with bold colors and a variety of responsive UI patterns and themeable elements.
  • Enhanced notifications — New lockscreen notifications let you surface content, updates, and actions to users at a glance, without needing to unlock their device. Heads-up notifications let you display content and actions in a small floating window managed by the system, no matter which app is in the foreground. Notifications are refreshed for Material Design and you can use accent colors to express your brand.
  • Concurrent documents in Overview — Now you can organize your app by tasks and present these concurrently as individual “documents” on the Overview screen. For example, instant messaging apps could declare each chat as a separate document. Users can flip through these on the Overview screen to find the specific chat they want and jump straight to it.
Performance
  • Android Runtime (ART) — Android 5.0 runs exclusively on the ART runtime. ART offers ahead-of-time (AOT) compilation, more efficient garbage collection, and improved development and debugging features. In many cases it improves performance of the device, without you having to change your code.
  • 64-bit support — Support for 64-bit ABIs provides additional address space and improved performance with certain compute workloads. Apps written in the Java language can run immediately on 64-bit architectures with no modifications required. NDK r10c includes 64-bit support, for apps and games using native code.
  • Project Volta — New tools and APIs help you build battery-efficient apps. Battery Historian, a tool included in the SDK, lets you visualize power events over time and understand how your app is using battery. The JobScheduler API lets you set the conditions under which your background tasks and other jobs should run, such as when the device is idle or connected to an unmetered network or to a charger, to minimize battery impact. More in this I/O video.
  • OpenGL ES 3.1 and Android Extension Pack — With OpenGL ES 3.1, you get compute shaders, stencil textures, and texture gather for your games. Android Extension Pack (AEP) is a new set of extensions to OpenGL ES that bring desktop-class graphics to Android including tessellation and geometry shaders, and use ASTC texture compression across GPU technologies. More on what's new for game developers in this DevBytes video.
  • WebView updates — We’ve updated WebView to support WebRTC, WebAudio, and WebGL. WebView also includes native support for all of the Web Components specifications: Custom Elements, Shadow DOM, HTML Imports, and Templates. WebView is now unbundled from the system and will be regularly updated through Google Play.
Workplace
  • Managed provisioning and unified view of apps — To make it easier for employees to have a single device for personal and work use, framework enhancements offer a unified view of apps, notifications & recents across work apps and personal apps. Profile owner APIs, in the workplace context, let administrators create and manage work profiles, defined as part of a new managed provisioning process. More in this I/O video.
Media
  • Advanced camera capabilities — A new camera API gives you new capabilities for advanced image capture and processing. On supported devices, your app can capture uncompressed YUV images at full 8 megapixel resolution at 30 FPS. You can also capture raw sensor data and control parameters such as exposure time, ISO sensitivity, and frame duration, on a per-frame basis.
  • Audio improvements — The sound architecture has been enhanced, with lower input latency in OpenSL, the addition of multichannel-mixing, and USB digital audio mode support. More in this I/O video.
Connectivity
  • BLE Peripheral Mode — Android devices can now function in Bluetooth Low Energy (BLE) peripheral mode. Apps can use this capability to broadcast their presence to nearby devices — for example, you can now build apps that let a device function as a beacon and transmit data to another BLE device. More in this I/O video.
  • Multi-networking — Apps can dynamically request networks based on capabilities such as metered or unmetered. This is useful when you want to use a specific network, such as cellular. Apps can also request that the platform re-evaluate networks for an internet connection. This is useful when your app sees unusually high latency on a particular network: it can prompt the platform to switch to a better network (if available) sooner, with a graceful handoff.
Get started!

You can get started developing and testing on Android 5.0 right away by downloading the Android 5.0 Platform (API level 21), as well as the SDK Tools, Platform Tools, and Support Package from the Android SDK Manager.

Check out the DevByte video below for more of what’s new in Lollipop!



Join the discussion on
+Android Developers


Categories: Programming

Agile Concepts: Cadence

In today’s business environment a plurality of organizations use a two week sprint cadence.

In Agile, cadence is the number of days or weeks in a sprint or release. Stated another way, it is the length of the team’s development cycle. In today’s business environment a plurality of organizations use a two week sprint cadence. The cadence that a project or organization selects is based on a number of factors, including criticality, risk and the type of project.

A ‘critical’ IT project would be crucial, decisive or vital. Projects, or any kind of work that can be defined as critical, need to be given every chance to succeed. Feedback is important for keeping critical projects pointed in the right direction. Projects that are highly important will benefit from gathering feedback early and often. The Agile cycle of planning, demonstrating progress and holding retrospectives is tailor-made to gather feedback and then act on it. A shorter cycle leads to a faster cadence and quicker feedback.

Similarly, projects with higher levels of risk will benefit from faster feedback so that the team and the organization can evaluate whether the risk is being mitigated or whether it is being converted into reality. Feedback reduces the potential for surprises; therefore, a faster cadence is a good tool for reducing some forms of risk.

The type of project can have an impact on cadence. Projects that include hardware engineering or interfaces with heavyweight approval mechanisms will tend to have slower cadences. For example, a project I was recently asked about required two separate approval board reviews (one regulatory and the second security related). Both took approximately five working days. The length of time required for the reviews was not significantly impacted by the amount of work each group needed to approve. The team adopted a four-week cadence to minimize the potential for downtime while waiting for feedback and to reduce the rework risk of going forward without approval. Maintenance projects, on the other hand, can often leverage Kanban or Scrumban in more of a continuous flow approach (no time box).

Development cadence is not synonymous with release cadence. In many Agile techniques, the sprint cadence and the release cadence do not have to be the same. The Scaled Agile Framework Enterprise (SAFe) makes the point that teams should develop on a cadence, but release on demand. Many teams use a fast development cadence only to release in large chunks (often called releases). When completed work is released is often a reflection of business need, an artifact of waterfall thinking or, in some rare cases, the organization’s operational environment.

Most projects will benefit from faster feedback. Shorter cycles, i.e. a faster cadence, are an important tool for generating feedback and reducing risk. A faster cadence is almost always the right answer, unless you really don’t want to know what is happening while you can still react.


Categories: Process Management

Neo4j: Modelling sub types

Mark Needham - Tue, 10/21/2014 - 00:08

A question which sometimes comes up when discussing graph data modelling is how you go about modelling sub/super types.

In my experience there are two reasons why we might want to do this:

  • To ensure that certain properties exist on bits of data
  • To write drill down queries based on those types

At the moment the former isn’t built into Neo4j and you’d only be able to achieve it by wiring up some code in a pre-commit hook of a transaction event handler, so we’ll focus on the latter.

The typical example used for showing how to design sub types is the animal kingdom, and I managed to find a data set from Louisville, Kentucky’s Animal Services which we can use.

In this case the sub types are used to represent the type of animal, breed group and breed. We then also have ‘real data’ in terms of actual dogs under the care of animal services.

We effectively end up with two graphs in one – a model and a meta model.


The cypher query to create this graph looks like this:

LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-subtypes/data/dogs.csv" AS line
MERGE (animalType:AnimalType {name: "Dog"})
MERGE (breedGroup:BreedGroup {name: line.BreedGroup})
MERGE (breed:Breed {name: line.PrimaryBreed})
MERGE (animal:Animal {id: line.TagIdentity, primaryColour: line.PrimaryColour, size: line.Size})
 
MERGE (animalType)<-[:PARENT]-(breedGroup)
MERGE (breedGroup)<-[:PARENT]-(breed)
MERGE (breed)<-[:PARENT]-(animal)

We could then write a simple query to find out how many dogs we have:

MATCH (animalType:AnimalType)<-[:PARENT*]-(animal)
RETURN animalType, COUNT(*) AS animals
ORDER BY animals DESC
==> +--------------------------------+
==> | animalType           | animals |
==> +--------------------------------+
==> | Node[89]{name:"Dog"} | 131     |
==> +--------------------------------+
==> 1 row

Or we could write a slightly more complex query to find the number of animals at each level of our type hierarchy:

MATCH path = (animalType:AnimalType)<-[:PARENT]-(breedGroup)<-[:PARENT*]-(animal)
RETURN [node IN nodes(path) | node.name][..-1] AS breed, COUNT(*) AS animals
ORDER BY animals DESC
LIMIT 5
==> +-----------------------------------------------------+
==> | breed                                     | animals |
==> +-----------------------------------------------------+
==> | ["Dog","SETTER/RETRIEVE","LABRADOR RETR"] | 15      |
==> | ["Dog","SETTER/RETRIEVE","GOLDEN RETR"]   | 13      |
==> | ["Dog","POODLE","POODLE MIN"]             | 10      |
==> | ["Dog","TERRIER","MIN PINSCHER"]          | 9       |
==> | ["Dog","SHEPHERD","WELSH CORGI CAR"]      | 6       |
==> +-----------------------------------------------------+
==> 5 rows

We might then decide to add an exercise sub graph which indicates how much exercise each type of dog requires:

MATCH (breedGroup:BreedGroup)
WHERE breedGroup.name IN ["SETTER/RETRIEVE", "POODLE"]
MERGE (exercise:Exercise {type: "2 hours hard exercise"})
MERGE (exercise)<-[:REQUIRES_EXERCISE]-(breedGroup);
MATCH (breedGroup:BreedGroup)
WHERE breedGroup.name IN ["TERRIER", "SHEPHERD"]
MERGE (exercise:Exercise {type: "1 hour gentle exercise"})
MERGE (exercise)<-[:REQUIRES_EXERCISE]-(breedGroup);

We could then query that to find out which dogs need to come out for 2 hours of hard exercise:

MATCH (exercise:Exercise {type: "2 hours hard exercise"})<-[:REQUIRES_EXERCISE]-()<-[:PARENT*]-(dog)
WHERE NOT (dog)<-[:PARENT]-()
RETURN dog
LIMIT 10
==> +-----------------------------------------------------------+
==> | dog                                                       |
==> +-----------------------------------------------------------+
==> | Node[541]{id:"664427",primaryColour:"BLACK",size:"SMALL"} |
==> | Node[542]{id:"543787",primaryColour:"BLACK",size:"SMALL"} |
==> | Node[543]{id:"584021",primaryColour:"BLACK",size:"SMALL"} |
==> | Node[544]{id:"584022",primaryColour:"BLACK",size:"SMALL"} |
==> | Node[545]{id:"664430",primaryColour:"BLACK",size:"SMALL"} |
==> | Node[546]{id:"535176",primaryColour:"BLACK",size:"SMALL"} |
==> | Node[567]{id:"613557",primaryColour:"WHITE",size:"SMALL"} |
==> | Node[568]{id:"531376",primaryColour:"WHITE",size:"SMALL"} |
==> | Node[569]{id:"613567",primaryColour:"WHITE",size:"SMALL"} |
==> | Node[570]{id:"531379",primaryColour:"WHITE",size:"SMALL"} |
==> +-----------------------------------------------------------+
==> 10 rows

In this query we ensured that we only returned dogs rather than breeds by checking that there was no incoming PARENT relationship. Alternatively we could have filtered on the Animal label…

MATCH (exercise:Exercise {type: "2 hours hard exercise"})<-[:REQUIRES_EXERCISE]-()<-[:PARENT*]-(dog:Animal)
RETURN dog
LIMIT 10

or, if we wanted to only take the dogs out for exercise, perhaps we’d put a Dog label on the appropriate nodes.
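Adding that label in bulk is straightforward. Here is a minimal sketch, assuming the official Neo4j Python driver is available (the queries in this post were run in the shell, and the connection URI and credentials below are placeholders); it follows the PARENT relationships up to the “Dog” AnimalType and sets a :Dog label on each animal it reaches:

# Hypothetical sketch: bulk-add a :Dog label to every animal whose PARENT
# chain leads to the "Dog" AnimalType, so later queries can filter on :Dog.
from neo4j import GraphDatabase

ADD_DOG_LABEL = """
MATCH (animal:Animal)-[:PARENT*]->(:AnimalType {name: "Dog"})
SET animal:Dog
RETURN count(animal) AS labelled
"""

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))  # placeholder connection details
with driver.session() as session:
    labelled = session.run(ADD_DOG_LABEL).single()["labelled"]
    print("Added :Dog label to", labelled, "animals")
driver.close()

With the label in place, the exercise query could simply match (dog:Dog) instead of excluding nodes that have an incoming PARENT relationship.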

People are often curious why labels don’t have super/sub types between them, but I tend to use labels for simple categorisation – anything more complicated and we may as well use the built-in power of the graph model!

The code is on github should you wish to play with it.

Categories: Programming

Your Community Door

Coding Horror - Jeff Atwood - Mon, 10/20/2014 - 20:32

What are the real-world consequences of signing up for a Twitter or Facebook account through Tor and spewing hate toward other human beings?

Facebook reviewed the comment I reported and found it doesn't violate their Community Standards. pic.twitter.com/p9syG7oPM1

— Rob Beschizza (@Beschizza) October 15, 2014

As far as I can tell, nothing. There are barely any online consequences, even if the content is reported.

But there should be.

The problem is that Twitter and Facebook aim to be discussion platforms for "everyone", where every person, no matter how hateful and crazy they may be, gets a turn on the microphone. They get to be heard.

The hover text for that xkcd comic is so good it deserves escalation:

I can't remember where I heard this, but someone once said that defending a position by citing free speech is sort of the ultimate concession; you're saying that the most compelling thing you can say for your position is that it's not literally illegal to express.

If the discussion platform you're using aims to be a public platform for the whole world, there are some pretty terrible things people can do and say to other people there with no real consequences, under the noble banner of free speech.

It can be challenging.

How do we show people like this the door? You can block, you can hide, you can mute. But what you can't do is show them the door, because it's not your house. It's Facebook's house. It's their door, and the rules say the whole world has to be accommodated within the Facebook community. So mute and block and so forth are the only options available. But they are anemic, barely workable options.

As we build Discourse, I've discovered that I am deeply opposed to mute and block functions. I think that's because the whole concept of Discourse is that it is your house. And mute and ignore, while arguably unavoidable for large worldwide communities, are actively dangerous for smaller communities. Here's why.

  • It allows you to ignore bad behavior. If someone is hateful or harassing, why complain? Just mute. No more problem. Except everyone else still gets to see a person being hateful or harassing to another human being in public. Which means you are now sending a message to all other readers that this is behavior that is OK and accepted in your house.

  • It puts the burden on the user. A kind of victim blaming — if someone is rude to you, then "why didn't you just mute / block them?" The solution is right there in front of you, why didn't you learn to use the software right? Why don't you take some responsibility and take action to stop the person abusing you? Every single time it happens, over and over again?

  • It does not address the problematic behavior. A mute is invisible to everyone. So the person who is getting muted by 10 other users is getting zero feedback that their behavior is causing problems. It's also giving zero feedback to moderators that this person should probably get an intervention at the very least, if not outright suspended. It's so bad that people are building their own crowdsourced block lists for Twitter.

  • It causes discussions to break down. Fine, you mute someone, so you "never" see that person's posts. But then another user you like quotes the muted user in their post, or references their @name, or replies to their post. Do you then suppress just the quoted section? Suppress the @name? Suppress all replies to their posts, too? This leaves big holes in the conversation and presents many hairy technical challenges. Given enough personal mutes and blocks and ignores, all conversation becomes a weird patchwork of partially visible statements.

  • This is your house and your rules. This isn't Twitter or Facebook or some other giant public website with an expectation that "everyone" will be welcome. This is your house, with your rules, and your community. If someone can't behave themselves to the point that they are consistently rude and obnoxious and unkind to others, you don't ask the other people in the house to please ignore it – you ask them to leave your house. Engendering some weird expectation of "everyone is allowed here" sends the wrong message. Otherwise your house no longer belongs to you, and that's a very bad place to be.

I worry that people are learning the wrong lessons from the way Twitter and Facebook poorly handle these situations. Their hands are tied because they aspire to be these global communities where free speech trumps basic human decency and empathy.

The greatest power of online discussion communities, in my experience, is that they don't aspire to be global. You set up a clubhouse with reasonable rules your community agrees upon, and anyone who can't abide by those rules needs to be gently shown the door.

Don't pull this wishy washy non-committal stuff that Twitter and Facebook do. Community rules are only meaningful if they are actively enforced. You need to be willing to say this to people, at times:

No, your behavior is not acceptable in our community; "free speech" doesn't mean we are obliged to host your content, or listen to you being a jerk to people. This is our house, and our rules.

If they don't like it, fortunately there's a whole Internet of other communities out there. They can go try a different house. Or build their own.

The goal isn't to slam the door in people's faces – visitors should always be greeted in good faith, with a hearty smile – but simply to acknowledge that in those rare but inevitable cases where good faith breaks down, a well-oiled front door will save your community.

Categories: Programming