
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Agile Concepts: Cadence


In Agile, cadence is the number of days or weeks in a sprint or release. Stated another way, it is the length of the team’s development cycle. In today’s business environment a plurality of organizations use a two-week sprint cadence. The cadence that a project or organization selects is based on a number of factors that include criticality, risk and the type of project.

A ‘critical’ IT project would be crucial, decisive or vital. Projects, or any kind of work that can be defined as critical, need to be given every chance to succeed. Feedback is important for keeping critical projects pointed in the right direction. Projects that are highly important will benefit from gathering feedback early and often. The Agile cycle of planning, demonstrating progress and retrospectives is tailor-made to gather feedback and then act on that feedback. A shorter cycle leads to a faster cadence and quicker feedback.

Similarly, projects with higher levels of risk will benefit from faster feedback so that the team and the organization can evaluate whether the risk is being mitigated or whether risk is being converted into reality. Feedback reduces the potential for surprises; therefore, a faster cadence is a good tool for reducing some forms of risk.

The type of project can have an impact on cadence. Projects that include hardware engineering or interfaces with heavyweight approval mechanisms will tend to have slower cadences. For example, a project I was recently asked about required two separate approval board reviews (one regulatory and the second security related). Both took approximately five working days. The length of time required for the reviews was not significantly impacted by the amount of work each group needed to approve. The team adopted a four-week cadence to minimize the potential for downtime while waiting for feedback and to reduce the rework risk of going forward without approval. Maintenance projects, on the other hand, can often leverage Kanban or Scrumban in more of a continuous flow approach (no time box).

Development cadence is not synonymous with release cadence. In many Agile techniques, the sprint cadence and the release cadence do not have to be the same. The Scaled Agile Framework (SAFe) makes the point that teams should develop on a cadence, but release on demand. Many teams use a fast development cadence only to release in large chunks (often called releases). When completed work is actually released is often a reflection of business need, an artifact of waterfall thinking or, in some rare cases, a constraint of the organization’s operational environment.

Most projects will benefit from faster feedback. Shorter cycles, i.e. a faster cadence, are an important tool for generating feedback and reducing risk. A faster cadence is almost always the right answer, unless you really don’t want to know what is happening while you can still react.


Categories: Process Management

Neo4j: Modelling sub types

Mark Needham - Tue, 10/21/2014 - 00:08

A question which sometimes comes up when discussing graph data modelling is how you go about modelling sub/super types.

In my experience there are two reasons why we might want to do this:

  • To ensure that certain properties exist on bits of data
  • To write drill down queries based on those types

At the moment the former isn’t built into Neo4j and you’d only be able to achieve it by wiring up some code in a pre-commit hook of a transaction event handler, so we’ll focus on the latter.

The typical example used for showing how to design sub types is the animal kingdom and I managed to find a data set from Louisville, Kentucky’s Animal Services which we can use.

In this case the sub types are used to represent the type of animal, breed group and breed. We then also have ‘real data’ in terms of actual dogs under the care of animal services.

We effectively end up with two graphs in one – a model and a meta model:

(graph visualisation: model and meta model)

The cypher query to create this graph looks like this:

LOAD CSV WITH HEADERS FROM "file:/Users/markneedham/projects/neo4j-subtypes/data/dogs.csv" AS line
MERGE (animalType:AnimalType {name: "Dog"})
MERGE (breedGroup:BreedGroup {name: line.BreedGroup})
MERGE (breed:Breed {name: line.PrimaryBreed})
MERGE (animal:Animal {id: line.TagIdentity, primaryColour: line.PrimaryColour, size: line.Size})
 
MERGE (animalType)<-[:PARENT]-(breedGroup)
MERGE (breedGroup)<-[:PARENT]-(breed)
MERGE (breed)<-[:PARENT]-(animal)

We could then write a simple query to find out how many dogs we have:

MATCH (animalType:AnimalType)<-[:PARENT*]-(animal)
RETURN animalType, COUNT(*) AS animals
ORDER BY animals DESC
==> +--------------------------------+
==> | animalType           | animals |
==> +--------------------------------+
==> | Node[89]{name:"Dog"} | 131     |
==> +--------------------------------+
==> 1 row

Or we could write a slightly more complex query to find the number of animals at each level of our type hierarchy:

MATCH path = (animalType:AnimalType)<-[:PARENT]-(breedGroup)<-[:PARENT*]-(animal)
RETURN [node IN nodes(path) | node.name][..-1] AS breed, COUNT(*) AS animals
ORDER BY animals DESC
LIMIT 5
==> +-----------------------------------------------------+
==> | breed                                     | animals |
==> +-----------------------------------------------------+
==> | ["Dog","SETTER/RETRIEVE","LABRADOR RETR"] | 15      |
==> | ["Dog","SETTER/RETRIEVE","GOLDEN RETR"]   | 13      |
==> | ["Dog","POODLE","POODLE MIN"]             | 10      |
==> | ["Dog","TERRIER","MIN PINSCHER"]          | 9       |
==> | ["Dog","SHEPHERD","WELSH CORGI CAR"]      | 6       |
==> +-----------------------------------------------------+
==> 5 rows

We might then decide to add an exercise sub graph which indicates how much exercise each type of dog requires:

MATCH (breedGroup:BreedGroup)
WHERE breedGroup.name IN ["SETTER/RETRIEVE", "POODLE"]
MERGE (exercise:Exercise {type: "2 hours hard exercise"})
MERGE (exercise)<-[:REQUIRES_EXERCISE]-(breedGroup);
MATCH (breedGroup:BreedGroup)
WHERE breedGroup.name IN ["TERRIER", "SHEPHERD"]
MERGE (exercise:Exercise {type: "1 hour gentle exercise"})
MERGE (exercise)<-[:REQUIRES_EXERCISE]-(breedGroup);

We could then query that to find out which dogs need to come out for 2 hours of hard exercise:

MATCH (exercise:Exercise {type: "2 hours hard exercise"})<-[:REQUIRES_EXERCISE]-()<-[:PARENT*]-(dog)
WHERE NOT (dog)<-[:PARENT]-()
RETURN dog
LIMIT 10
==> +-----------------------------------------------------------+
==> | dog                                                       |
==> +-----------------------------------------------------------+
==> | Node[541]{id:"664427",primaryColour:"BLACK",size:"SMALL"} |
==> | Node[542]{id:"543787",primaryColour:"BLACK",size:"SMALL"} |
==> | Node[543]{id:"584021",primaryColour:"BLACK",size:"SMALL"} |
==> | Node[544]{id:"584022",primaryColour:"BLACK",size:"SMALL"} |
==> | Node[545]{id:"664430",primaryColour:"BLACK",size:"SMALL"} |
==> | Node[546]{id:"535176",primaryColour:"BLACK",size:"SMALL"} |
==> | Node[567]{id:"613557",primaryColour:"WHITE",size:"SMALL"} |
==> | Node[568]{id:"531376",primaryColour:"WHITE",size:"SMALL"} |
==> | Node[569]{id:"613567",primaryColour:"WHITE",size:"SMALL"} |
==> | Node[570]{id:"531379",primaryColour:"WHITE",size:"SMALL"} |
==> +-----------------------------------------------------------+
==> 10 rows

In this query we ensured that we only returned dogs rather than breeds by checking that there was no incoming PARENT relationship. Alternatively we could have filtered on the Animal label…

MATCH (exercise:Exercise {type: "2 hours hard exercise"})<-[:REQUIRES_EXERCISE]-()<-[:PARENT*]-(dog:Animal)
RETURN dog
LIMIT 10

or if we wanted to only take the dogs out for exercise perhaps we’d put a Dog label on the appropriate nodes.

People are often curious why labels don’t have super/sub types between them but I tend to use labels for simple categorisation – anything more complicated and we may as well use the built-in power of the graph model!

The code is on github should you wish to play with it.

Categories: Programming

Your Community Door

Coding Horror - Jeff Atwood - Mon, 10/20/2014 - 20:32

What are the real world consequences to signing up for a Twitter or Facebook account through Tor and spewing hate toward other human beings?

Facebook reviewed the comment I reported and found it doesn't violate their Community Standards. pic.twitter.com/p9syG7oPM1

— Rob Beschizza (@Beschizza) October 15, 2014

As far as I can tell, nothing. There are barely any online consequences, even if the content is reported.

But there should be.

The problem is that Twitter and Facebook aim to be discussion platforms for "everyone", where every person, no matter how hateful and crazy they may be, gets a turn on the microphone. They get to be heard.

The hover text for this one is so good it deserves escalation:

I can't remember where I heard this, but someone once said that defending a position by citing free speech is sort of the ultimate concession; you're saying that the most compelling thing you can say for your position is that it's not literally illegal to express.

If the discussion platform you're using aims to be a public platform for the whole world, there are some pretty terrible things people can do and say to other people there with no real consequences, under the noble banner of free speech.

It can be challenging.

How do we show people like this the door? You can block, you can hide, you can mute. But what you can't do is show them the door, because it's not your house. It's Facebook's house. It's their door, and the rules say the whole world has to be accommodated within the Facebook community. So mute and block and so forth are the only options available. But they are anemic, barely workable options.

As we build Discourse, I've discovered that I am deeply opposed to mute and block functions. I think that's because the whole concept of Discourse is that it is your house. And mute and ignore, while arguably unavoidable for large worldwide communities, are actively dangerous for smaller communities. Here's why.

  • It allows you to ignore bad behavior. If someone is hateful or harassing, why complain? Just mute. No more problem. Except everyone else still gets to see a person being hateful or harassing to another human being in public. Which means you are now sending a message to all other readers that this is behavior that is OK and accepted in your house.

  • It puts the burden on the user. A kind of victim blaming — if someone is rude to you, then "why didn't you just mute / block them?" The solution is right there in front of you, why didn't you learn to use the software right? Why don't you take some responsibility and take action to stop the person abusing you? Every single time it happens, over and over again?

  • It does not address the problematic behavior. A mute is invisible to everyone. So the person who is getting muted by 10 other users is getting zero feedback that their behavior is causing problems. It's also giving zero feedback to moderators that this person should probably get an intervention at the very least, if not outright suspended. It's so bad that people are building their own crowdsourced block lists for Twitter.

  • It causes discussions to break down. Fine, you mute someone, so you "never" see that person's posts. But then another user you like quotes the muted user in their post, or references their @name, or replies to their post. Do you then suppress just the quoted section? Suppress the @name? Suppress all replies to their posts, too? This leaves big holes in the conversation and presents many hairy technical challenges. Given enough personal mutes and blocks and ignores, all conversation becomes a weird patchwork of partially visible statements.

  • This is your house and your rules. This isn't Twitter or Facebook or some other giant public website with an expectation that "everyone" will be welcome. This is your house, with your rules, and your community. If someone can't behave themselves to the point that they are consistently rude and obnoxious and unkind to others, you don't ask the other people in the house to please ignore it – you ask them to leave your house. Engendering some weird expectation of "everyone is allowed here" sends the wrong message. Otherwise your house no longer belongs to you, and that's a very bad place to be.

I worry that people are learning the wrong lessons from the way Twitter and Facebook poorly handle these situations. Their hands are tied because they aspire to be these global communities where free speech trumps basic human decency and empathy.

The greatest power of online discussion communities, in my experience, is that they don't aspire to be global. You set up a clubhouse with reasonable rules your community agrees upon, and anyone who can't abide by those rules needs to be gently shown the door.

Don't pull this wishy washy non-committal stuff that Twitter and Facebook do. Community rules are only meaningful if they are actively enforced. You need to be willing to say this to people, at times:

No, your behavior is not acceptable in our community; "free speech" doesn't mean we are obliged to host your content, or listen to you being a jerk to people. This is our house, and our rules.

If they don't like it, fortunately there's a whole Internet of other communities out there. They can go try a different house. Or build their own.

The goal isn't to slam the door in people's faces – visitors should always be greeted in good faith, with a hearty smile – but simply to acknowledge that in those rare but inevitable cases where good faith breaks down, a well-oiled front door will save your community.

Categories: Programming

What's New in Android 5.0 Lollipop

Android Developers Blog - Mon, 10/20/2014 - 20:18

By Ankur Kotwal, Developer Advocate

Android 5.0 Lollipop is the biggest update of Android to date, introducing an all new visual style, improved performance, and much more. Android 5.0 Lollipop also extends across screens big and small, including phones, tablets, wearables, TVs and cars, to give your users access to information when they need it most.

To get you started on developing and testing on Android 5.0 Lollipop, here are some of the developer highlights with links to related videos and documentation.

User experience
  • Material design for the multiscreen world — Material Design is a new approach for designing apps in today’s multi-device world that takes a comprehensive strategy to visual, motion, and interaction design across a number of platforms and form factors. Android 5.0 brings Material Design to the platform, with a full set of tools for implementing material design in your apps. The system is incredibly flexible, allowing your app to express its individual character and brand with bold colors and a variety of responsive UI patterns and themeable elements.
  • Enhanced notifications — New lockscreen notifications let you surface content, updates, and actions to users at a glance, without needing to unlock their device. Heads-up notifications let you display content and actions in a small floating window managed by the system, no matter which app is in the foreground. Notifications are refreshed for Material Design and you can use accent colors to express your brand.
  • Concurrent documents in Overview — Now you can organize your app by tasks and present these concurrently as individual “documents” on the Overview screen. For example, instant messaging apps could declare each chat as a separate document. Users can flip through these on the Overview screen to find the specific chat they want and jump straight to it.
Performance
  • Android Runtime (ART) — Android 5.0 runs exclusively on the ART runtime. ART offers ahead-of-time (AOT) compilation, more efficient garbage collection, and improved development and debugging features. In many cases it improves performance of the device, without you having to change your code.
  • 64-bit support — Support for 64-bit ABIs provides additional address space and improved performance with certain compute workloads. Apps written in the Java language can run immediately on 64-bit architectures with no modifications required. NDK r10c includes 64-bit support, for apps and games using native code.
  • Project Volta — New tools and APIs help you build battery-efficient apps. Battery Historian, a tool included in the SDK, lets you visualize power events over time and understand how your app is using battery. The JobScheduler API lets you set the conditions under which your background tasks and other jobs should run, such as when the device is idle or connected to an unmetered network or to a charger, to minimize battery impact. More in this I/O video.
  • OpenGL ES 3.1 and Android Extension Pack — With OpenGL ES 3.1, you get compute shaders, stencil textures, and texture gather for your games. Android Extension Pack (AEP) is a new set of extensions to OpenGL ES that bring desktop-class graphics to Android including tessellation and geometry shaders, and use ASTC texture compression across GPU technologies. More on what's new for game developers in this DevBytes video.
  • WebView updates — We’ve updated WebView to support WebRTC, WebAudio, and WebGL. WebView also includes native support for all of the Web Components specifications: Custom Elements, Shadow DOM, HTML Imports, and Templates. WebView is now unbundled from the system and will be regularly updated through Google Play.
Workplace
  • Managed provisioning and unified view of apps — To make it easier for employees to have a single device for personal and work use, framework enhancements offer a unified view of apps, notifications & recents across work apps and personal apps. Profile owner APIs, in the workplace context, let administrators create and manage work profiles, which are defined as part of a new managed provisioning process. More in this I/O video.
Media
  • Advanced camera capabilities — A new camera API gives you new capabilities for advanced image capture and processing. On supported devices, your app can capture uncompressed YUV images at full 8-megapixel resolution at 30 FPS. You can also capture raw sensor data and control parameters such as exposure time, ISO sensitivity, and frame duration, on a per-frame basis.
  • Audio improvements — The sound architecture has been enhanced, with lower input latency in OpenSL, the addition of multichannel-mixing, and USB digital audio mode support. More in this I/O video.
Connectivity
  • BLE Peripheral Mode — Android devices can now function in Bluetooth Low Energy (BLE) peripheral mode. Apps can use this capability to broadcast their presence to nearby devices — for example, you can now build apps that let a device function as a beacon and transmit data to another BLE device. More in this I/O video.
  • Multi-networking — Apps can dynamically request networks based on capabilities such as metered or unmetered. This is useful when you want to use a specific network, such as cellular. Apps can also request that the platform re-evaluate networks for an internet connection. This is useful when your app sees unusually high latency on a particular network: it lets the platform switch to a better network (if available) sooner, with a graceful handoff.
Get started!

You can get started developing and testing on Android 5.0 right away by downloading the Android 5.0 Platform (API level 21), as well as the SDK Tools, Platform Tools, and Support Package from the Android SDK Manager.

Check out the DevByte video below for more of what’s new in Lollipop!





Categories: Programming

Testing on the Toilet: Writing Descriptive Test Names

Google Testing Blog - Mon, 10/20/2014 - 19:22
by Andrew Trenk

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

How long does it take you to figure out what behavior is being tested in the following code?

@Test public void isUserLockedOut_invalidLogin() {
  authenticator.authenticate(username, invalidPassword);
  assertFalse(authenticator.isUserLockedOut(username));

  authenticator.authenticate(username, invalidPassword);
  assertFalse(authenticator.isUserLockedOut(username));

  authenticator.authenticate(username, invalidPassword);
  assertTrue(authenticator.isUserLockedOut(username));
}

You probably had to read through every line of code (maybe more than once) and understand what each line is doing. But how long would it take you to figure out what behavior is being tested if the test had this name?

isUserLockedOut_lockOutUserAfterThreeInvalidLoginAttempts

You should now be able to understand what behavior is being tested by reading just the test name, and you don’t even need to read through the test body. The test name in the above code sample hints at the scenario being tested (“invalidLogin”), but it doesn’t actually say what the expected outcome is supposed to be, so you had to read through the code to figure it out.

Putting both the scenario and the expected outcome in the test name has several other benefits:

- If you want to know all the possible behaviors a class has, all you need to do is read through the test names in its test class, compared to spending minutes or hours digging through the test code or even the class itself trying to figure out its behavior. This can also be useful during code reviews since you can quickly tell if the tests cover all expected cases.

- By giving tests more explicit names, it forces you to split up testing different behaviors into separate tests. Otherwise you may be tempted to dump assertions for different behaviors into one test, which over time can lead to tests that keep growing and become difficult to understand and maintain.

- The exact behavior being tested might not always be clear from the test code. If the test name isn’t explicit about this, sometimes you might have to guess what the test is actually testing.

- You can easily tell if some functionality isn’t being tested. If you don’t see a test name that describes the behavior you’re looking for, then you know the test doesn’t exist.

- When a test fails, you can immediately see what functionality is broken without looking at the test’s source code.

There are several common patterns for structuring the name of a test (one example is to name tests like an English sentence with “should” in the name, e.g., shouldLockOutUserAfterThreeInvalidLoginAttempts). Whichever pattern you use, the same advice still applies: Make sure test names contain both the scenario being tested and the expected outcome.
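The same advice applies in any language or test framework. As a small illustrative sketch (pytest-style Python rather than the Java used above; the Authenticator class here is a made-up stand-in, not the class from the example), a test whose name states both the scenario and the expected outcome might look like:

class Authenticator:
    # Minimal stand-in so the example runs; not a real authentication library.
    def __init__(self, lockout_threshold=3):
        self.lockout_threshold = lockout_threshold
        self.failed_attempts = {}

    def authenticate(self, username, password):
        # Every attempt in this stub is treated as an invalid login.
        self.failed_attempts[username] = self.failed_attempts.get(username, 0) + 1

    def is_user_locked_out(self, username):
        return self.failed_attempts.get(username, 0) >= self.lockout_threshold

def test_locks_out_user_after_three_invalid_login_attempts():
    authenticator = Authenticator(lockout_threshold=3)
    for _ in range(3):
        authenticator.authenticate("alice", "invalid-password")
    assert authenticator.is_user_locked_out("alice")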

Sometimes just specifying the name of the method under test may be enough, especially if the method is simple and has only a single behavior that is obvious from its name.

Categories: Testing & QA

Facebook Mobile Drops Pull For Push-based Snapshot + Delta Model

We've learned mobile is different. In If You're Programming A Cell Phone Like A Server You're Doing It Wrong we learned programming for a mobile platform is its own specialty. In How Facebook Makes Mobile Work At Scale For All Phones, On All Screens, On All Networks we learned bandwidth on mobile networks is a precious resource. 

Given all that, how do you design a protocol to sync state (think messages, comments, etc.) between mobile nodes and the global state holding servers located in a datacenter?

Facebook recently wrote about their new solution to this problem in Building Mobile-First Infrastructure for Messenger. They were able to reduce bandwidth usage by 40% and reduce by 20% the terror of hitting send on a phone.

That's a big win...that came from a protocol change.

Facebook Messenger went from a traditional notification-triggered full-state pull:
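As a rough, generic sketch of the snapshot-plus-delta idea (illustrative Python only; the names, data model, and the server interface are invented here and are not Facebook's actual protocol), a client might hold a local snapshot, apply pushed deltas to it, and fall back to a full pull only when it detects a gap:

class SnapshotDeltaClient:
    # Toy push-based sync client: keeps a versioned snapshot and applies deltas.
    def __init__(self, server):
        self.server = server   # assumed to expose fetch_snapshot() -> (state, version)
        self.state = {}        # local snapshot, e.g. message_id -> message
        self.version = 0       # version of the snapshot currently held

    def full_sync(self):
        # The traditional pull model: fetch everything (expensive on mobile networks).
        self.state, self.version = self.server.fetch_snapshot()

    def on_push(self, delta):
        # Push model: the server sends only what changed since delta["base_version"].
        if delta["base_version"] != self.version:
            # A push was missed (reconnect, dropped notification): recover with a full pull.
            self.full_sync()
            return
        self.state.update(delta.get("changes", {}))
        for key in delta.get("deletions", []):
            self.state.pop(key, None)
        self.version = delta["new_version"]

The point of this shape is that steady-state traffic is proportional to what changed, not to the size of the whole conversation state.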

Categories: Architecture

Python: Converting a date string to timestamp

Mark Needham - Mon, 10/20/2014 - 16:53

I’ve been playing around with Python over the last few days while cleaning up a data set and one thing I wanted to do was translate date strings into a timestamp.

I started with a date in this format:

date_text = "13SEP2014"

So the first step is to translate that into a Python date – the strftime section of the documentation is useful for figuring out which format code is needed:

import datetime
 
date_text = "13SEP2014"
date = datetime.datetime.strptime(date_text, "%d%b%Y")
 
print(date)
$ python dates.py
2014-09-13 00:00:00

The next step was to translate that to a UNIX timestamp. I thought there might be a method or property on the Date object that I could access but I couldn’t find one and so ended up using calendar to do the transformation:

import datetime
import calendar
 
date_text = "13SEP2014"
date = datetime.datetime.strptime(date_text, "%d%b%Y")
 
print(date)
print(calendar.timegm(date.utctimetuple()))
$ python dates.py
2014-09-13 00:00:00
1410566400
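As an aside, on Python 3.3+ the datetime object does have a timestamp() method, but for a naive datetime it is interpreted in local time. To get the same UTC-based value as calendar.timegm, you can attach a UTC timezone first - a small sketch, assuming Python 3:

import datetime

date_text = "13SEP2014"
date = datetime.datetime.strptime(date_text, "%d%b%Y")

# Mark the datetime as UTC so timestamp() doesn't apply the local UTC offset
utc_date = date.replace(tzinfo=datetime.timezone.utc)
print(int(utc_date.timestamp()))  # 1410566400, matching calendar.timegm above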

It’s not too tricky so hopefully I shall remember next time.

Categories: Programming

4 Biggest Reasons Why Software Developers Suck at Estimation

Making the Complex Simple - John Sonmez - Mon, 10/20/2014 - 15:00

Estimation is difficult. Most people aren’t good at it–even in mundane situations. For example, when my wife asks me how much longer it will take me to fix some issue I’m working on or to head home, I almost always invariably reply “five minutes.” I almost always honestly believe it will only take five minutes, […]

The post 4 Biggest Reasons Why Software Developers Suck at Estimation appeared first on Simple Programmer.

Categories: Programming

Mix Mashup

NOOP.NL - Jurgen Appelo - Mon, 10/20/2014 - 10:08

<promotion>

The MIX Mashup is a gathering of the vanguard of management innovators—pioneering leaders, courageous hackers, and agenda-setting thinkers from every realm of endeavor

The post Mix Mashup appeared first on NOOP.NL.

Categories: Project Management

Decision Making Without Estimates?

Herding Cats - Glen Alleman - Mon, 10/20/2014 - 02:56

In a recent post there are 5 suggestions of how decisions about software development can be made in the absence of estimating the cost, duration, and impact of these decisions. Before looking at each in more detail, let's see what the basis is of these suggestions from the post.

A decision-making strategy is a model, or an approach that helps you make allocation decisions (where to put more effort, or spend more time and/or money). However, I would add one more characteristic: a decision-making strategy that helps you choose which software project to start must help you achieve business goals that you define for your business. More specifically, a decision-making strategy is an approach to making decisions that follows your existing business strategy.

Decision making in the presence of the allocation of limited resources is called Microeconomics. These decisions - in the presence of limited resources - involve opportunity costs. That is, what is the cost of NOT choosing one of the alternatives - the allocations? To know this means we need to know something about the outcome of NOT choosing. We can't wait to do the work; we need to know what happens - to some level of confidence - if we DON'T do something. How can we do this? We need to estimate what happens if we don't choose one of the possible allocations, since all the outcomes are in the future.
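As a tiny, hypothetical illustration of why opportunity cost forces an estimate (all numbers below are invented), compare two allocation options by their probability-weighted value:

# Invented figures: probability of success, value if successful, and cost for two options
options = {
    "Option A": {"p_success": 0.6, "value": 500_000, "cost": 200_000},
    "Option B": {"p_success": 0.9, "value": 300_000, "cost": 150_000},
}

expected = {name: o["p_success"] * o["value"] - o["cost"] for name, o in options.items()}
for name, ev in sorted(expected.items(), key=lambda kv: -kv[1]):
    print(f"{name}: expected net value = {ev:,.0f}")

# The opportunity cost of NOT choosing the best option is the expected value forgone,
# and every term in that calculation is an estimate of a future outcome.
best = max(expected, key=expected.get)
runner_up = max(v for k, v in expected.items() if k != best)
print(f"Opportunity cost of passing over {best}: {expected[best] - runner_up:,.0f}")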

But first, the post started with suggesting the five approaches are part of Strategy. I'm familiar with strategy making in the domain of software development, having been schooled by two Balanced Scorecard leaders while working as a program manager for a large Department of Energy site, where we pioneered the use of agile development in the presence of highly formal nuclear safety and safeguards applications.

What is Strategy?

Before proceeding with the 5 suggestions, let's look at what strategy is, since it is common to confuse strategy with tactics.

Strategy is creating fit among a firm’s activities. The success of a strategy depends on doing many things well – not just a few. The things that are done well must operate within a closely knit system. If there is no fit among the activities, there is no distinctive strategy and little to sustain the strategic deployment process. Management then reverts to the simpler task of overseeing independent functions. When this occurs, operational effectiveness determines the relative performance of the firm. 

Improving operational effectiveness is a necessary part of management, but it is not strategy. In confusing the two, managers will be unintentionally backed into a way of thinking about competition that drives the business processes (IT) away from the strategic support and toward the tactical improvement of operational effectiveness.

Managers must be able to clearly distinguish operational effectiveness from strategy. Both are essential, but the two agendas are different. The operational effectiveness agenda involves continual improvement of business processes that have no trade-offs associated with them. The operational effectiveness agenda is the proper place for constant change, flexibility, and relentless efforts to achieve best practices.

In contrast, the strategic agenda is the place for making clear trade offs and tightening the fit between the participating business components. Strategy involves the continual search for ways to reinforce and extend the company’s position in the market place. 

“What is Strategy,” M. E. Porter, Harvard Business Review, Volume 74, Number 6, pp. 61–78.

Using Porter's notion of strategy in a business context, the post seems more about tactics. But ignoring that for the moment, let's look further into the ideas presented in the post.

I'm going to suggest that each of the five decision processes described in the post is a proper one - each with many possible approaches - but each has ignored the underlying principle of Microeconomics. This principle is that decisions about future outcomes are informed by the opportunity cost, and that cost requires - mandates actually, since the outcomes are in the future - an estimate. This is the basis of Real Options, forecasting, and the very core of business decision making in the presence of uncertainty.

The post then asks

  1. How well does this decision proposal help us reach our business goals?
  2. Does the risk profile resulting from this decision fit our acceptable risk profile?

 
The 1st question needs another question to be answered: what are our business goals and what are the units of measure of these goals? In order to answer the 1st question we need a steering target to know how we are proceeding toward that goal.

The second question is about risk. All risk comes from uncertainty. Two types of uncertainty exist on projects:

Reducible (Epistemic) and Irreducible (Aleatory). Epistemic uncertainty comes from lack of knowledge. Epistemology is the study of the acquisition of knowledge. We can pay money to buy down this lack of knowledge; that is, Epistemic uncertainty can be reduced with work - risk reduction work. But this leaves open the question of how much time, budget, and performance margin is needed.

ANSWER: We need an Estimate of the Probability of the Risk Coming True. Estimating the Epistemic risk's probability of occurrence, the cost and schedule for the reduction efforts, and the probability of the residual risk is done with a probabilistic model. There are several such models and many tools. But estimating all of these components - occurrence, impact, effort to mitigate, and residual risk - is required.

Aleatory uncertainty comes from the naturally occurring variances of the underlying processes. The only way to reduce the risk arising from Aleatory uncertainty is with margin: Cost Margin, Schedule Margin, Performance Margin. But this leaves open the question of how much margin is needed.

ANSWER: We need to estimate the needed margin from the Probability Distribution Function of the Underlying Statistical Process. Estimating the needed aleatory margin (cost, schedule, and performance) can be done with Monte Carlo Simulation or Method of Moments.
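As a minimal sketch of the Monte Carlo side of that answer (the three tasks and their triangular distributions below are invented purely for illustration):

import random

# Invented example: (min, most likely, max) duration in days for three tasks
tasks = [(3, 5, 9), (2, 4, 8), (5, 8, 14)]

def simulated_total():
    # Triangular distributions stand in for the naturally occurring (aleatory) variability
    return sum(random.triangular(low, high, mode) for low, mode, high in tasks)

trials = sorted(simulated_total() for _ in range(10_000))
p50 = trials[len(trials) // 2]
p80 = trials[int(len(trials) * 0.80)]
print(f"P50 = {p50:.1f} days, P80 = {p80:.1f} days")
print(f"Schedule margin for 80% confidence: {p80 - p50:.1f} days")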


So one more look at the suggestions before examining further the 5 ways of making decisions in the absence of estimating their impacts and the cost to achieve those impacts.

All decisions have inherent risks, and we must consider risks before elaborating on the different possible decision-making strategies. If you decide to invest in a new and shiny technology for your product, how will that affect your risk profile?

All risk is probabilistic, based on underlying statistical processes: either the process of lack of knowledge (Epistemic) or the process of natural variability (Aleatory). In the consideration of risk we must incorporate these probabilistic and statistical behaviours in our decision making activities. Since the outcomes of these processes occur in the future, we need to estimate them based on knowledge - or lack of knowledge - of their probability of occurrence. For the naturally occurring variances that have occurred in the past we need to know how they might occur in the future. To answer these questions, we need a probabilistic model, based on the underlying statistical processes. And since the underlying model is statistical, we need to estimate the impact of this behaviour.

Let's Look At The Five Decision Making Processes

1. Do the most important work first - If you are starting to implement a new strategy, you should allocate enough teams and resources to the work that helps you validate and fine tune the selected strategy. This might take the form of prioritizing work that helps you enter a new segment, or find a more valuable niche in your current segment, etc. The focus in this decision-making approach is: validating the new strategy. Note that the goal is not "implement new strategy", but rather "validate new strategy". The difference is fundamental: when trying to validate a strategy you will want to create short-term experiments that are designed to validate your decision, instead of planning and executing a large project from start to end. The best way to run your strategy validation work is to run the short-term experiments and re-prioritize your backlog of experiments based on the results of each experiment.

    • Important work first is good strategy. But importance needs a unit of measure. That unit of measure should be found in the strategy. This is the purpose of the strategy. But the strategy needs units of measure as well. Simply saying do important work first doesn't provide a way to make that decision.
    • The notion of validating versus implementing the strategy is artificial. A read of the Strategy Making literature will clear this up. Strategy for business and especially strategy in IT is a very mature domain, with a long history.
    • One approach to generating the units of measure from the strategy is Balanced Score Card, where strategic objectives are mapped to Performance Goals, then to Critical Success Factors, then to the Key Performance Indicators. The way to do that is with a Strategy Map, shown below.
    • This is the use of strategy as Porter defines it. 

(Strategy Map example)

2. Do the Highest Technical Risk First - When you want to transition to a new architecture or adopt a new technology, you may want to start by doing the work that validates that technical decision. For example, if you are adopting a new technology to help you increase scalability of your platform, you can start by implementing the bottleneck functionality of your platform with the new technology. Then test if the gains in scalability are in line with your needs and/or expectations. Once you prove that the new technology fulfills your scalability needs, you should start to migrate all functionality to the new technology step by step in order of importance. This should be done using short-term implementation cycles that you can easily validate by releasing or testing the new implementation.

    • This is likely dependent on the technical and programmatic architecture of the project or product. 
    • We may want to establish a platform on which to build riskier components. A platform that is known and trusted, stable, bug free - before embarking on any high risk development.
    • High risk may mean high cost. So doing risky things first has consequences. What are those consequences? One is risking the budget before it's clear we have a stable platform on which to build follow-on capabilities. Knowing something is high risk may mean high cost, and this requires estimating something that will occur in the future - the cost to achieve and the cost of the consequences.
    • So doing the highest technical risk first is itself a risk that needs to be assessed. Without this assessment, this suggestion has no way of being tested in practice.

3. Do the Easiest Work First - Suppose you just expanded your team and want to make sure they get to know each other and learn to work together. This may be due to a strategic decision to start a new site in a new location. Selecting the easiest work first will give the new teams an opportunity to get to know each other, establish the processes they need to be effective, but still deliver concrete, valuable working software in a safe way.

    • This is also dependent on the technical and programmatic architecture of the project or product.
    • It also runs counter to #2, since the highest risk work is not likely to be the easiest to do.
    • These assessments between risk and work sequence require some sort of trade space analysis, and since the outcomes and their impacts are in the future, estimating these is part of the Analysis of Alternatives approach for any non-trivial project where Systems Engineering guides the work processes.

4. Do the Legal Requirements First - In medical software there are regulations that must be met. Those regulations affect certain parts of the work/architecture. By delivering those parts first you can start the legal certification for your product before the product is fully implemented, and later - if needed - certify the changes you may still need to make to the original implementation. This allows you to improve significantly the time-to-market for your product. A medical organization that successfully adopted agile used this project decision-making strategy with a considerable business advantage as they were able to start selling their product many months ahead of the scheduled release. They were able to go to market earlier because they successfully isolated and completed the work necessary to certify the key functionality of their product. Rather than trying to predict how long the whole project would take, they implemented the key legal requirements first, then started to collect feedback about the product from the market - gaining a significant advantage over their direct competitors.

    • Medical Devices are regulated with 21 CFR Parts 800-1299. The suggestion doesn't reference any regulations for medical software, which ranges from patient check-in at the front desk to surgical devices controlled by software.
    • Developing 21 CFR Software components first may not be possible until the foundation on which they are built is established, tested, and verified.
    • This means - Quality Planning, Requirements, Design, Construction or Coding, Testing by the Software Developer, User Site Testing, and Maintenance and Software Changes. 
    • Once the plan - a required plan for validation - is in place, the order of the development will be more visible. 
    • Deciding which components to develop, just because they are impacted by Legal Requirements usually means ALL the components. So this approach - Do The Legal Requirements First - usually means do them all.
    • The notion of - Rather than trying to predict how long the whole project would take, they implemented the key legal requirements first, then started to collect feedback about the product from the market - fails to describe how they knew when they would be ready to test out these ideas. And, most importantly, how they were able to go to market in the absence of the certification.
    • As well, what type of testing - early trials, full 21 CFR release, human applications, animal testing, etc. - is not stated. I have some experience with medical devices.
    • A colleague is the CEO of http://acumyn.com/ - I'll confirm the processes of validating software with him.

5. Liability Driven Investment - This approach is borrowed from a stock exchange investment strategy that aims to tackle a problem similar to what every bootstrapped business faces: what work should we do now, so that we can fund the business in the near future? In this approach we make decisions with the aim of generating the cash flows needed to fund future liabilities.

    • It's not clear why this is called liability. Liability on the balance sheet is an obligation to pay. Deciding what work to do now to generate needed revenue is certainly a strategy. Value Stream Mapping or Impact Mapping is a way to define that. But liability seems to be the wrong term.
    • Not sure how that connects with a Securities Exchange and what problem they are solving using the term liabilities. Shorts are obligations to pay in the future when the short is called. Puts and Calls are terms used in stock trading, but developing software products is not trading. The Real Options used by the poster in the past don't exercise the Option, so the liability to pay doesn't seem to connect here.

References

  1. Risk Informed Decision Handbook, NASA/SP-2010-576 Version 1.0 April 2010.
  2. General Principles of Software Validation; Final Guidance for Industry and FDA Staff, US Food and Drug Administration.

  3. Strategy Maps: Converting Intangible Assets into Tangible Outcomes, Robert Kaplan and David Norton, Harvard Business Press.
  4. Estimating Optimal Decision Rules in Presence of Model Parameter Uncertainty, Christopher Joseph Bennett, Vanderbilt University, June 6, 2012.
Categories: Project Management

Quote of the Day

Herding Cats - Glen Alleman - Mon, 10/20/2014 - 01:08

To achieve great things, two things are needed; a plan, and not quite enough time.
− Leonard Bernstein

Plan of the Day (CV-41)

The notion that planning is a waste is common in domains where mission-critical, high-risk / high-reward, must-work type projects do not exist.

Notice the Plan and the Planned delivery date. The notion that deadlines are somehow evil goes along with the lack of understanding that business needs a set of capabilities to be in place on a date in order to start booking the value in the general ledger.

Plans are strategies. Strategies are hypotheses. A hypothesis is tested with experiments. Experiments show from actual data what the outcome of the work is. These outcomes are used as feedback to take corrective actions at the strategic and tactical level of the project.

This is called Closed Loop Control. Set the strategy, define the units of measure for the desired outcome - Measures of Effectiveness and Measures of Performance. Perform work and assess these measures. Determine the variance between the planned outcomes and the needed outcomes. Take corrective action by adjusting the plan to keep the project moving toward the strategic goals. For Closed Loop Control we need the following (a minimal sketch follows the list):

  • A steering target for some future state.
  • A measure of the current state.
  • The variance between current and future.
  • Corrective actions to put the project back on track toward the desired state.
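A minimal sketch of that loop (the planned values, measurements, and the simple proportional correction below are invented for illustration):

# Toy closed-loop steering: compare planned vs. measured progress and adjust effort
planned = [10, 20, 30, 40, 50]    # steering targets for each period (invented)
measured = [8, 17, 29, 35, 45]    # measures of the current state (invented)

effort = 1.0  # normalized capacity applied to the work
for period, (target, actual) in enumerate(zip(planned, measured), start=1):
    variance = target - actual    # gap between desired and current state
    effort += 0.02 * variance     # corrective action proportional to the variance
    print(f"Period {period}: variance = {variance}, adjusted effort = {effort:.2f}")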

Categories: Project Management

SPaMCAST 312 – Alex Neginsky, A Leader and Practitioner’s View of Agile


Listen to the SPaMCAST 312 now!

SPaMCAST 312 features our interview with Alex Neginsky.  Alex is a real leader and practitioner in a real company that has really applied Agile.  Alex shares pragmatic advice about how to practice Agile in the real world!

Alex’s bio:

Alex Neginsky began his career in the software industry at the age of 16 as a Software Engineer for Ultimate Software. He earned his Bachelor’s degree in Computer Science at Florida Atlantic University in 2006. By age 27, Alex obtained his first software patent.

Alex has been at MTech, a division of Newmarket International, since 2011. As the Director of Development he brings 15 years of experience, technical skills, and management capabilities. Alex manages highly skilled software professionals across several teams stationed all over Eastern Europe and the United States. He serves as the liaison between MTech Development and the industry. During his tenure with the MTech division of Newmarket, Alex has been pivotal in the adoption of the complete software development lifecycle and has spearheaded the adoption of leading Agile Development Methodologies such as Scrum and Kanban. This has yielded higher velocity and better efficiencies throughout the organization.

Contact Alex at aneginsky@newmarketinc.com

LinkedIn

If you have the right stuff and are interested in joining Newmarket then check out:

http://www.newmarketinc.com/careers/join-newmarket

Call to action!

What are the two books that have most influenced your career (business, technical or philosophical)?  Send the titles to spamcastinfo@gmail.com.  What will we do with this list?  We have two ideas.  First, we will compile a list and publish it on the blog.  Second, we will use the list to drive “Re-read” Saturday. Re-read Saturday is an exciting new feature we will begin in November. More on this new feature next week. So feel free to choose your platform and send an email, leave a message on the blog, Facebook or just tweet the list (use hashtag #SPaMCAST)!

Next

SPaMCAST 313 features our essay on developing an initial backlog.  Developing an initial backlog is an important step to get projects going and moving in the right direction. If a project does not start well, it is hard for it to end well.  We will provide techniques to help you begin well!

Upcoming Events

DCG Webinars:

Agile Risk Management – It Is Still Important! October 24, 2014 11:30 EDT

Has the adoption of Agile techniques magically erased risk from software projects? Or, have we just changed how we recognize and manage risk?  Or, more frighteningly, by changing the project environment through adopting Agile techniques, have we tricked ourselves into thinking that risk has been abolished?

Upcoming Conferences:

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events!

The Software Process and Measurement Cast has a sponsor.

As many of you know I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast gets some support if you sign up here. All the revenue our sponsorship generates goes for bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management


Android 5.0 Lollipop SDK and Nexus Preview Images

Android Developers Blog - Sun, 10/19/2014 - 07:12

Two more weeks!

By Jamal Eason, Product Manager, Android

At Google I/O last June, we gave you an early version of Android 5.0 with the L Developer Preview, running on Nexus 5, Nexus 7 and Android TV. Over the course of the L Developer Preview program, you’ve given us great feedback and we appreciate the engagement from you, our developer community. Thanks!

This week, we announced Android 5.0 Lollipop. Starting today, you can download the full release of the Android 5.0 SDK, along with updated developer images for Nexus 5, Nexus 7 (2013), ADT-1, and the Android emulator.

The first set of devices to run this new version of Android -- Nexus 6, Nexus 9, and Nexus Player -- will be available in early November. In the same timeframe, we'll also roll out the Android 5.0 update worldwide to Nexus 4, 5, 7 (2012 & 2013), and 10 devices, as well as to Google Play edition devices.

Therefore, now is the time to test your apps on the new platform. You have two more weeks to get ready!

What’s in Lollipop?

Android 5.0 Lollipop introduces a host of new APIs and features including:

There's much more, so check out the Android 5.0 platform highlights for a complete overview.

What’s in the Android 5.0 SDK?

The Android 5.0 SDK includes updated tools and new developer system images for testing. You can develop against the latest Android platform using API level 21 and take advantage of the updated support library to implement Material Design as well as the leanback user interface for TV apps.

You can download these components through the Android SDK Manager and develop your app in Android Studio:

  • Android 5.0 SDK Platform & Tools
  • Android 5.0 Emulator System Image - 32-bit & 64-bit (x86)
  • Android 5.0 Emulator System Image for Android TV (32-bit)
  • Android v7 appcompat Support Library for Material Design theme backwards compatibility
  • Android v17 leanback library for Android TV app support

For developers using the Android NDK for native C/C++ Android apps we have:

For developers on Android TV devices we have:

  • Android 5.0 system image over the air (OTA) update for ADT-1 Developer Kit. OTA updates will appear over the next few days.

Similar to our previous release of the preview, we are also providing updated system image downloads for Nexus 5 & Nexus 7 (2013) devices to help with your testing as well. These images support the Android 5.0 SDK, but only have the minimal apps pre-installed in order to enable developer testing:

  • Nexus 5 (GSM/LTE) “hammerhead” Device System Image
  • Nexus 7 (2013) - (Wifi) “razor” Device System Image

For the developer preview versions, there will not be an over the air (OTA) update. You will need to wipe and reflash your developer device to use the latest developer preview versions. If you want to receive the official consumer OTA update in November and any other official updates, you will have to have a factory image on your Nexus device.

Validate your apps with the Android 5.0 SDK

With the consumer availability of Android 5.0 and the Nexus 6, Nexus 9, and Nexus Player right around the corner, here are a few things you should do to prepare:

  1. Get the emulator system images through the SDK Manager or download the Nexus device system images.
  2. Recompile your apps against the Android 5.0 SDK, especially if you used any preview APIs. Note: APIs have changed between the preview SDK and the final SDK.
  3. Validate that your current Android apps run on the new API level 21 with ART enabled. And if you use the NDK for your C/C++ Android apps, validate against the 64-bit emulator. ART is enabled by default on API 21 & new Android devices with Android 5.0.

Once you validate your current app, explore the new APIs and features for Android 5.0.

Migrate Your Existing App to Material Design

Android 5.0 Lollipop introduces Material Design, which enables your apps to adopt a bold, colorful, and flexible design, while remaining true to a small set of key principles that guide user interaction across multiple screens and devices.

After making sure your current apps work with Android 5.0, now is the time to enable the Material theme in your app with the AppCompat support library. For quick tips & recommendations for making your app shine with Material Design, check out our Material Design guidelines and tablet optimization tips. For those of you new to Material Design, check out our Getting Started guide.

Get your apps ready for Google Play!

Starting today, you can publish your apps that are targeting Android 5.0 Lollipop to Google Play. In your app manifest, update android:targetSdkVersion to "21", test your app, and upload it to the Google Play Developer Console.

Starting November 3rd, Nexus 9 will be the first device available to consumers that will run Android 5.0. Therefore, it is a great time to publish on Google Play, once you've updated and tested your app. Even if your apps target earlier versions of Android, take a few moments to test them on the Android 5.0 system images, and publish any updates needed in advance of the Android 5.0 rollout.

Stay tuned for more details on the Nexus 6 and Nexus 9 devices, and how to make sure your apps look their best on them.

Next up, Android TV!

We also announced the first consumer Android TV device, Nexus Player. It’s a streaming media player for movies, music and videos, and also a first-of-its-kind Android gaming device. Users can play games on their HDTVs with a gamepad, then keep playing on their phones while they’re on the road. The device is also Google Cast-enabled, so users can cast your app from their phones or tablets to their TV.

If you’re developing for Android TV, watch for more information on November 3rd about how to distribute your apps to Android TV users through the Google Play Developer Console. You can start getting your app ready by making sure it meets all of the TV Quality Guidelines.

Get started with Android 5.0 Lollipop platform

If you haven’t had a chance to take a look at this new version of Android yet, download the SDK and get started today. You can learn more about what’s new in the Android 5.0 platform highlights and get all the details on new APIs and changed behaviors in the API Overview. You can also check out the latest DevBytes videos to learn more about Android 5.0 features.

Enjoy this new release of Android!

Join the discussion on +Android Developers
Categories: Programming

Agile Risk Management: The Need Still Exists

ROAM (Hand Drawn Chart Saturday)

I once asked the question, "Has the adoption of Agile techniques magically erased risk from software projects?" The obvious answer is no; however, Agile has provided a framework and tools to reduce the overall level of performance variance. Even if we can reduce risk by damping the variance driven by complexity, the size of the work, process discipline and people, we still need to "ROAM" the remaining risks. ROAM is a model that helps teams evaluate the risks they have identified. Applying the model, a team reviews each risk and classifies it as:

  • Resolved: the risk has been answered and avoided or eliminated.
  • Owned: someone has accepted the responsibility for doing something about the risk.
  • Accepted: the risk has been understood and the team has agreed that nothing will be done about it.
  • Mitigated: something has been done so that the probability or potential impact is reduced.

When we consider any risk, we need to recognize its two attributes: impact and probability. Impact is what will happen if the risk becomes something tangible. Impacts are typically monetized or stated as the amount of effort needed to correct the problem if it occurs. The size of the impact can vary depending on when the risk occurs. For example, if we suddenly discover during sprint 19 that the system architecture will not scale to the level required, the cost in rework would be higher than if that fact were discovered in sprint 2. Probability is the likelihood that a risk will become an issue. In a similar manner to impact, probability varies over time.
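One illustrative way to combine the two attributes is to monetize the exposure as probability times impact and then watch how it changes over time. The figures below are invented purely for this sketch:

# illustrative, invented numbers: the same scaling risk priced at two points in time
probability      <- 0.10     # likelihood the architecture will not scale
impact_sprint_2  <- 20000    # rework cost if the problem surfaces in sprint 2
impact_sprint_19 <- 200000   # rework cost if it surfaces in sprint 19
probability * impact_sprint_2    # exposure early in the project:  2,000
probability * impact_sprint_19   # exposure late in the project:  20,000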

We defined risk as "any uncertain event that can have an impact on the success of a project." Does using Agile change our need to recognize and mitigate risk? No, but instead of a classic risk management plan and a risk log, a more Agile approach to risk management might be generating additional user stories. While Agile techniques reduce some forms of risk, we still need to be vigilant. Adding risks to the project or program backlog will help reduce the chance of variability and surprises.


Categories: Process Management

What Is Systems Architecture And Why Should We Care?

Herding Cats - Glen Alleman - Sat, 10/18/2014 - 20:10

If we were setting out to build a home, we would first lay out the floor plans, grouping each room by function and placing structural items within each room according to their best utility. This is not an arbitrary process – it is architecture. Moving from home design to IT system design does not change the process. Grouping data and processes into information systems creates the rooms of the system architecture. Arranging the data and processes for the best utility is the result of deploying an architecture. Many of the attributes of building architecture are applicable to system architecture. Form, function, best use of resources and materials, human interaction, reuse of design, longevity of the design decisions, robustness of the resulting entities are all attributes of well designed buildings and well designed computer systems. [1]

In general, an architecture is a set of rules that defines a unified and coherent structure consisting of constituent parts and connections that establish how those parts fit and work together. An architecture may be conceptualized from a specific perspective focusing on an aspect or view of its subject. These architectural perspectives themselves can become components in a higher–level architecture serving to integrate and unify them into a higher level structure.

The architecture must define the rules, guidelines, or constraints for creating conformant implementations of the system. While this architecture does not specify the details on any implementation, it does establish guidelines that must be observed in making implementation choices. These conditions are particularly important for component architectures that embody extensibility features to allow additional capabilities to be added to previously specified parts. [2] This is the case where Data Management is the initial deployment activity followed by more complex system components.

Adopting a system architecture motivation as the basis for the IT Strategy produces several benefits:

  • Business processes are streamlined – a fundamental benefit to building enterprise information architecture is the discovery and elimination of redundancy in the business processes themselves. In effect, it can drive the reengineering of the business processes it is designed to support. This occurs during the construction of the information architecture. By revealing the different organizational views of the same processes and data, any redundancies can be documented and dealt with. The fundamental approach to building the information architecture is to focus on data, process and their interaction.
  • Systems information complexity is reduced – the architectural framework reduces information system complexity by identifying and eliminating redundancy in data and software. The resulting enterprise information architecture will have significantly fewer applications and databases as well as a resulting reduction in intersystem links. This simplification also leads to significantly reduced costs. Some of those recovered costs can and should be reinvested into further information system improvements. This reinvestment activity becomes the raison d'être for the enterprise–wide system deployment.
  • Enterprise–wide integration is enabled through data sharing and consolidation – the information architecture identifies the points to deploy standards for shared data. For example, most Kimball business units hold a wealth of data about products, customers, and manufacturing processes. However, this information is locked within the confines of the business unit specific applications. The information architecture forces compatibility for shared enterprise data. This compatible information can be stripped out of operational systems, merged to provide an enterprise view, and stored in data repositories. In addition, data standards streamline the operational architecture by eliminating the need to translate or move data between systems. A well–designed architecture not only streamlines the internal information value chain, but it can provide the infrastructure necessary to link information value chains between business units or allow effortless substitution of information value chains.
  • Rapid evolution to new technologies is enabled – client / server and object–oriented technology revolves around the understanding of data and the processes that create and access this data. Since the enterprise information architecture is structured around data and process and not redundant organizational views of the same thing, the application of client / server and object–oriented technologies is much cleaner. Attempting to move to these new technologies without an enterprise information architecture will result in the eventual rework of the newly deployed system.

[1] The Timeless Way of Building, C. Alexander, Oxford University Press, 1979.

[2] “How Architecture Wins Technology Wars,” C. Morris and C. Ferguson, Harvard Business Review, March–April 1993, pp. 86–96.

Categories: Project Management

Neo4j: LOAD CSV – The sneaky null character

Mark Needham - Sat, 10/18/2014 - 11:49

I spent some time earlier in the week trying to import a CSV file extracted from Hadoop into Neo4j using Cypher’s LOAD CSV command and initially struggled due to some rogue characters.

The CSV file looked like this:

$ cat foo.csv
foo,bar,baz
1,2,3

I wrote the following LOAD CSV query to extract some of the fields and compare others:

load csv with headers from "file:/Users/markneedham/Downloads/foo.csv" AS line
RETURN line.foo, line.bar, line.bar = "2"
==> +--------------------------------------+
==> | line.foo | line.bar | line.bar = "2" |
==> +--------------------------------------+
==> | <null>   | "2"     | false          |
==> +--------------------------------------+
==> 1 row

I had expected to see a "1" in the first column and a 'true' in the third column, neither of which happened.

I initially didn’t have a text editor with hexcode mode available so I tried checking the length of the entry in the ‘bar’ field:

load csv with headers from "file:/Users/markneedham/Downloads/foo.csv" AS line
RETURN line.foo, line.bar, line.bar = "2", length(line.bar)
==> +---------------------------------------------------------+
==> | line.foo | line.bar | line.bar = "2" | length(line.bar) |
==> +---------------------------------------------------------+
==> | <null>   | "2"     | false          | 2                |
==> +---------------------------------------------------------+
==> 1 row

The length of that value is 2 when we’d expect it to be 1 given it’s a single character.

I tried trimming the field to see if that made any difference…

load csv with headers from "file:/Users/markneedham/Downloads/foo.csv" AS line
RETURN line.foo, trim(line.bar), trim(line.bar) = "2", length(line.bar)
==> +---------------------------------------------------------------------+
==> | line.foo | trim(line.bar) | trim(line.bar) = "2" | length(line.bar) |
==> +---------------------------------------------------------------------+
==> | <null>   | "2"            | true                 | 2                |
==> +---------------------------------------------------------------------+
==> 1 row

…and it did! I thought there was probably a trailing whitespace character after the "2" which trim had removed, and that the 'foo' column in the header row had the same issue.

I was able to see that this was the case by extracting the JSON dump of the query via the Neo4j browser:

{  
   "table":{  
      "_response":{  
         "columns":[  
            "line"
         ],
         "data":[  
            {  
               "row":[  
                  {  
                     "foo\u0000":"1\u0000",
                     "bar":"2\u0000",
                     "baz":"3"
                  }
               ],
               "graph":{  
                  "nodes":[  
 
                  ],
                  "relationships":[  
 
                  ]
               }
            }
         ],
      ...
}

It turns out there were null characters scattered around the file, so I needed to pre-process the file to get rid of them:

$ tr < foo.csv -d '\000' > bar.csv

Now if we process bar.csv it’s a much smoother process:

load csv with headers from "file:/Users/markneedham/Downloads/bar.csv" AS line
RETURN line.foo, line.bar, line.bar = "2", length(line.bar)
==> +---------------------------------------------------------+
==> | line.foo | line.bar | line.bar = "2" | length(line.bar) |
==> +---------------------------------------------------------+
==> | "1"      | "2"      | true           | 1                |
==> +---------------------------------------------------------+
==> 1 row

Note to self: don’t expect data to be clean, inspect it first!

Categories: Programming

R: Linear models with the lm function, NA values and Collinearity

Mark Needham - Sat, 10/18/2014 - 07:35

In my continued playing around with R I’ve sometimes noticed ‘NA’ values in the linear regression models I created but hadn’t really thought about what that meant.

On the advice of Peter Huber I recently started working my way through Coursera’s Regression Models which has a whole slide explaining its meaning:

(slide from Coursera's Regression Models course)
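As a minimal sketch of the idea on that slide, reusing the swiss dataset example from the code further down (the exact model shown on the slide may differ):

library(datasets); data(swiss)
# 'z' is an exact linear combination of two predictors already in the model
z <- swiss$Agriculture + swiss$Education
fit <- lm(Fertility ~ . + z, data = swiss)
# the aliased term gets an NA coefficient because it adds no new information
coef(fit)["z"]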

So in this case 'z' doesn't help us in predicting Fertility since it doesn't give us any information that we can't already get from 'Agriculture' and 'Education'.

Although in this case we know why 'z' doesn't have a coefficient, sometimes it may not be clear which other variable the NA one is highly correlated with.

Multicollinearity (also collinearity) is a statistical phenomenon in which two or more predictor variables in a multiple regression model are highly correlated, meaning that one can be linearly predicted from the others with a non-trivial degree of accuracy.

In that situation we can make use of the alias function to explain the collinearity as suggested in this StackOverflow post:

library(datasets); data(swiss); require(stats); require(graphics)
z <- swiss$Agriculture + swiss$Education
fit = lm(Fertility ~ . + z, data = swiss)
> alias(fit)
Model :
Fertility ~ Agriculture + Examination + Education + Catholic + 
    Infant.Mortality + z
 
Complete :
  (Intercept) Agriculture Examination Education Catholic Infant.Mortality
z 0           1           0           1         0        0

In this case we can see that 'z' is highly correlated with both Agriculture and Education, which makes sense given it's the sum of those two variables.

When we notice that there’s an NA coefficient in our model we can choose to exclude that variable and the model will still have the same coefficients as before:

> require(dplyr)
> summary(lm(Fertility ~ . + z, data = swiss))$coefficients
                   Estimate  Std. Error   t value     Pr(>|t|)
(Intercept)      66.9151817 10.70603759  6.250229 1.906051e-07
Agriculture      -0.1721140  0.07030392 -2.448142 1.872715e-02
Examination      -0.2580082  0.25387820 -1.016268 3.154617e-01
Education        -0.8709401  0.18302860 -4.758492 2.430605e-05
Catholic          0.1041153  0.03525785  2.952969 5.190079e-03
Infant.Mortality  1.0770481  0.38171965  2.821568 7.335715e-03
> summary(lm(Fertility ~ ., data = swiss))$coefficients
                   Estimate  Std. Error   t value     Pr(>|t|)
(Intercept)      66.9151817 10.70603759  6.250229 1.906051e-07
Agriculture      -0.1721140  0.07030392 -2.448142 1.872715e-02
Examination      -0.2580082  0.25387820 -1.016268 3.154617e-01
Education        -0.8709401  0.18302860 -4.758492 2.430605e-05
Catholic          0.1041153  0.03525785  2.952969 5.190079e-03
Infant.Mortality  1.0770481  0.38171965  2.821568 7.335715e-03

If we call alias now we won’t see any output:

> alias(lm(Fertility ~ ., data = swiss))
Model :
Fertility ~ Agriculture + Examination + Education + Catholic + 
    Infant.Mortality
Categories: Programming

Agile Risk Management: People

People are chaotic.

Have you ever heard the saying, "software development would be easy if it weren't for the people"? People are one of the factors that cause variability in the performance of projects and releases (other factors include complexity, the size of the work, and process discipline). There are three mechanisms built into most Agile frameworks that acknowledge people can be chaotic by nature and that work to dampen the resulting variability.

  1. Team size, constitution and consistency are attributes that most Agile frameworks use to enhance productivity and effectiveness, and they also reduce the natural variability generated when people work together.
    1. The common Agile team size of 7 ± 2 is small enough that team members can establish and nurture personal relationships to ensure effective communication.
    2. Agile teams are typically cross-functional and include a Scrum master/coach and the product owner. The composition of the team fosters self-reliance and the ability to self-organize, again reducing variability.
    3. Long-lived teams tend to establish strong bonds that foster good communication and behaviors such as swarming. Swarming is a behavior in which team members rally to a task that is in trouble so that the team as a whole can meet its goal, which reduces overall variability in performance.
  2. Peer reviews of all types have been a standard tool to improve the quality and consistency of work products for decades. Peer reviews are a mechanism to remove defects from code or other work products before they are integrated into larger work products. The problem is that having someone else look at something you created and criticize it is grating. Extreme Programming took classic peer reviews a step further and put two people together at one keyboard, one typing and the other providing running commentary (a colloquial description of pair programming). Demonstrations are a variant of peer reviews. Removing defects earlier in the development process through observation and discussion reduces variability and therefore the risk of not delivering value.
  3. Daily stand-ups and other rituals are the outward markers of Agile techniques. Iteration/sprint planning keeps teams focused on what they need to do in the short-term future and then re-plans when that time frame is over. Daily stand-ups provide a platform for the team to sync up on a daily basis to reduce the variance that can creep in when plans diverge. Demonstrations show project stakeholders how the team is solving their business problems and solicit feedback to keep the team on track. All of these rituals reduce potential variability that can be introduced by people acting alone rather than as a team with a common goal.

In information technology projects of all types, people transform ideas and concepts into business value. In software development and maintenance, the tools and techniques might vary, but at their core, software-centric projects are social enterprises. Getting any group of people together to achieve a goal is a somewhat chaotic process. Agile techniques and frameworks have been structured to help individuals increase alignment and act together as a team to deliver business value.


Categories: Process Management

Zumero Professional Services

Eric.Weblog() - Eric Sink - Fri, 10/17/2014 - 19:00
Building mobile apps is really hard.
  • Why does my app work in the simulator but not on the device?
  • How do I get 60 frames-per-second UI performance on Android?
  • How do I manage concurrent access to my mobile database?
  • How do I deal with limited memory?
  • How do I keep something running in the background?
  • I have to compile for how many different CPU architectures?!?

Building apps for mobile devices involves a dizzying array of technologies and tooling. The explosion of mobile offers more ways for software projects to fail than ever before.

Zumero would like to help.

We can build your app for you. Or we can assist or advise as you build it yourself.

Both New and Old

Zumero for SQL Server is the best solution for syncing SQL data with mobile devices. Now we're looking ahead to providing more ways of helping developers build great business apps for mobile devices. We have additional products in the pipeline, but it is already clear that Zumero Professional Services is going to be an important piece of our story.

To some degree, we are formalizing what we are already doing for our customers. They're building apps to solve business problems, and they're constantly experiencing the reality that mobile app development is really, really hard. We often get involved in ways that go beyond the use of our product, helping with issues like concurrency and integration with native code.

So yes, Zumero Professional Services is a new thing for us. We see this as an opportunity because of the explosion of business apps being built. But it is also an old thing. We've done this before.

A number of years ago we did a lot of custom software development on mobile devices for companies like Motorola. This was back when smartphones weren't actually very smart. We built stuff that was burned into the phone's ROM, with CPU and memory constraints that make an iPhone look like a supercomputer. We had things like massive test suites with high code coverage.

We got out of that market, but now we're back, and even though the mobile industry has changed dramatically, many of the core technology challenges are still the same. This feels like a return to our roots.

Our place in the world

Specialties (We are especially excited about the stuff in this list.)

  • Apps that change your business for the better
  • Tough technical issues
  • Offline-capable apps with sync
  • Xamarin
  • Azure, Amazon Web Services
  • iOS, Android, Windows Phone
  • SQLite, Microsoft SQL Server
  • Helping companies use mobile apps to make their business better

Solid competencies (We can provide reliable execution and high-quality results using the technologies and practices in this list.)

  • Apps that solve business problems
  • Apps that integrate with existing systems
  • Networking, REST
  • Objective-C, Java, PhoneGap
  • Integration with on-prem backends
  • Windows, Mac OS
  • NoSQL
  • Building great apps that people like to use

Brussels sprouts (We actively avoid the stuff in this list.)

  • Games (unless we're playing them)
  • Photoshop
  • Data exchange via CDROM
  • Adobe Flash
  • Apps that don't let people interact
  • BlackBerry, PalmOS
  • FoxPro
  • Guessing what the next viral app will be

Interested?

Email me or sales@zumero.com to explore possibilities.