
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Re-Read Saturday: The Goal: A Process of Ongoing Improvement. Part 9  


Part of the reason I embarked on Re-Read Saturdays was to refresh myself on a number of books that have had a significant impact on my career. Re-grounding myself was somewhat of a selfish idea; however, at the same time as I refresh myself on important concepts, Re-Read Saturday has provided a platform to share those ideas with a wider audience. As we begin the second half of the re-read of The Goal, I have been struck by how many people have been exposed to the ideas in The Goal and how many of those ideas they have put into action, even if they can’t recite the Theory of Constraints verbatim. For example, the development and test manager who recognized the handoff from development to test as a bottleneck and worked out a prioritization scheme that maximized throughput of critical projects. Earlier entries in this re-read are:

Part 1       Part 2       Part 3      Part 4      Part 5      Part 6      Part 7      Part 8

Chapter 23: Alex meets with Ted Spencer, the supervisor for the heat-treat area, who is asking Alex to get the “computer guy” (Ralph) off his back. Ralph is asking Ted to keep significantly better records of when parts enter and exit the heat-treat process. Ted indicates that he does not know why Ralph wants the data. In the past few chapters we have seen the power of transparency and communication to support process changes; in this scenario the lack of transparency has generated conflict. When Alex meets with Ralph he finds that Ralph has been trying to use the process data to understand why more shipments have not been completed, and has noticed SIGNIFICANT variability in the times that parts enter and exit the heat-treat process. Often parts that have been completed sit until someone has time to unload the furnace. This originally led him to question the validity of the data, so he requested that Ted generate better data. The issue turns out to be that parts are often not immediately unloaded after the heat-treat process due to timing and staffing issues. The furnace is loaded and then heats the parts for four or five hours before being ready to be unloaded. In order to maximize efficiency, people are assigned other jobs during the heating process, which causes the timing and resource contention problems. The heat-treat bottleneck is not being run at or near maximum capacity, which reduces the output of the plant. As Ralph leaves he mentions that he believes they can predict when orders ship based on the bottlenecks (this is a bit of foreshadowing; however, an area of further reading you might consider is our discussion of Little’s Law). Note that the problem with the heat-treat process was identified through measurement and analysis of the data. The problem Ralph identified is a reflection of the ripple effect of other changes, and of the fact that as the process is refined better information is exposed.
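
A short aside that is not in the book: the relationship Ralph is reaching for is usually written as Little’s Law,

Work in Process = Throughput x Flow Time

so once the average throughput at a bottleneck is known, the queue of work sitting in front of it tells you roughly how long an order will take to come out the other side.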

Alex and Bob Donovan, the production supervisor, meet (loudly) over the discovery that the heat-treat process is not being used to maximum capacity. The discussion unearths that similar timing and resource problems are happening at the NCX-10, even though the new work rules ensure that no breaks are taken during machine set-up. The problem occurs when the machine stops and before the set-up begins again. They decide to staff both bottlenecks 24×7 so there is no downtime. The problem of whom to staff the NCX-10 and heat-treat with immediately exposes a new set of constraints, this time as a reflection of the overall organization’s policies on pay and hiring. Hiring, including layoff callbacks, is currently frozen, therefore Alex and his team need to rob Peter to pay Paul. (A bit of foreshadowing – the impact of change can ripple through other process steps.) In an overall sense, the efficiency of both the heat-treat and NCX-10 steps as measured in terms of cost per unit is being reduced while the overall effectiveness of the plant is being increased by the changes being made.

One of the other steps taken as part of the new changes to staff the bottlenecks is that Alex has let the foremen in the heat-treat area know that they will be rewarded for changes that improve the output of the process. The third-shift foreman makes two process changes that have a significant impact. He has broken high priority orders down and batched parts that require the same treatment together, and has prepped the material so that it is staged to be loaded as soon as the furnace is ready. He also points out to Alex that with a little help from engineering they can modify the loading process so that parts can be wheeled in and wheeled out rather than lifted in and out by a crane. The foreman is immediately shifted to first shift to work with engineering to make the changes and document the process. In Agile frameworks like Scrum this is EXACTLY why the teams doing the work need to reflect on how they are doing the work and take steps to improve their processes.

Chapter 24 begins on an up note! The plant has been able to ship more orders with a higher sales value than ever before, while reducing the level of work-in-progress. Champagne flows! During the celebration Bill Peach (Alex’s boss) calls and delivers praise from one of the clients who has noticed that his orders are getting delivered. In light of the continuing celebration Alex is driven home by a female member of staff, which leads to complications with Julie, his estranged wife, who just happens to be waiting to surprise Alex. If Alex’s marriage problems were not enough, the next workday Alex discovers that, like a virus, the bottlenecks have spread. Changes to the process to focus on making the parts that flow through the bottleneck more available have caused process steps for other parts to become bottlenecks.

Alex tracks down Johan and briefs him on what they have done. Johan suggests a visit to see what has been accomplished.

The Goal is truly about the Theory of Constraints; however, Johan’s role is the same as that of an Agile coach. Johan rarely solves the plant’s problems directly, but rather asks the questions that lead to a solution. The Goal provides a great side benefit of reinforcing one of the central ideas of Agile coaching.

Summary of The Goal so far:

Chapters 1 through 3 actively present the reader with a burning platform. The plant and division are failing. Alex Rogo has actively pursued increased efficiency and automation to generate cost reductions, however performance is falling even further behind and fear has become a central feature of the corporate culture.

Chapters 4 through 6 shift the focus from steps in the process to the process as a whole. Chapters 4 – 6 move us down the path of identifying the ultimate goal of the organization (in this book). The goal is making money and embracing the big picture of systems thinking. In this section, the authors point out that we are often caught up with pursuing interim goals, such as quality, efficiency or even employment, to the exclusion of the ultimate goal. The burning platform identified in the first few pages of the book, the impending closure of the plant and perhaps the division, reminds us that in the long run an organization must make progress towards its ultimate goal, or it won’t exist.

Chapters 7 through 9 show Alex’s commitment to change as he seeks more precise advice from Johan, brings his closest reports into the discussion and begins a dialog with his wife (remember this is a novel). In this section of the book the concept that “you get what you measure” is addressed. We see measures of efficiency being used at the level of part production, but not at the level of whole orders or even sales. We discover the corollary to the adage ‘you get what you measure’: if you measure the wrong thing, you get the wrong thing. We begin to see Alex’s urgency and commitment to make a change.

Chapters 10 through 12 mark a turning point in the book. Alex has embraced a more systems view of the plant and recognized that the measures that have been used to date are focused on optimizing parts of the process to the detriment of the overall goal of the plant. What has not fallen into place is how to take that new knowledge and change how the plant works. The introduction of the concepts of dependent events and statistical variation begins to shift the conceptual understanding of what to measure towards how the management team can actually use that information.

Chapters 13 through 16 drive home the point that dependent events and statistical variation impact the performance of the overall system. In order for the overall process to be more effective you have to understand the capability and capacity of each step and then take a systems view. These chapters establish the concepts of bottlenecks and constraints without directly naming them, and show that focusing on local optimums causes more trouble than benefit.

Chapters 17 through 18 introduce the concept of bottlenecked resources. The effect of the combination of dependent events and statistical variability flowing through bottlenecked resources makes delivery unpredictable and substantially more costly. The variability in flow through the process exposes bottlenecks that limit our ability to catch up, making projects and products late or, worse, generating technical debt when corners are cut in order to make the date or budget.

Chapters 19 through 20 begin with Johan coaching Alex’s team to help them identify a palette of possible solutions. They discover that every time the capacity of a bottleneck is increased, more product can be shipped. Changing the capacity of a bottleneck includes reducing downtime and the amount of waste the process generates. The impact of a bottleneck is not the cost of an individual part, but the cost of the whole product that cannot be shipped. Alex and his team implement changes incrementally rather than waiting until they can deliver all of the changes at once.

Chapters 21 through 22 are a short primer on change management. Just telling people to do something different does not generate support. Significant change requires transparency, communication and involvement. One of Deming’s 14 Principles is constancy of purpose. Alex and his team engage the workforce through a wide range of communication tools while staying focused on implementing the changes needed to stay in business.

Note: If you don’t have a copy of the book, buy one.  If you use the link below it will support the Software Process and Measurement blog and podcast. Dead Tree Version or Kindle Version

 


Categories: Process Management

R: Removing for loops

Mark Needham - Sun, 04/19/2015 - 00:53

In my last blog post I showed the translation of a likelihood function from Think Bayes into R and in my first attempt at this function I used a couple of nested for loops.

likelihoods = function(names, mixes, observations) {
  scores = rep(1, length(names))
  names(scores) = names
 
  for(name in names) {
      for(observation in observations) {
        scores[name] = scores[name] *  mixes[[name]][observation]      
      }
    }  
  return(scores)
}
Names = c("Bowl 1", "Bowl 2")
 
bowl1Mix = c(0.75, 0.25)
names(bowl1Mix) = c("vanilla", "chocolate")
bowl2Mix = c(0.5, 0.5)
names(bowl2Mix) = c("vanilla", "chocolate")
Mixes = list("Bowl 1" = bowl1Mix, "Bowl 2" = bowl2Mix)
Mixes
 
Observations = c("vanilla", "vanilla", "vanilla", "chocolate")
l = likelihoods(Names, Mixes, Observations)
 
> l / sum(l)
  Bowl 1   Bowl 2 
0.627907 0.372093

We pass in a vector of bowls, a nested list describing the mixes of cookies in each bowl and the observations that we’ve made. The function tells us that there’s an almost 2/3 probability of the cookies coming from Bowl 1 and just over 1/3 of them coming from Bowl 2.

In this case there probably won’t be much of a performance improvement by getting rid of the loops but we should be able to write something that’s more concise and hopefully idiomatic.

Let’s start by getting rid of the inner for loop. That can be replaced by a call to the Reduce function like so:

likelihoods2 = function(names, mixes, observations) {
  scores = rep(0, length(names))
  names(scores) = names
 
  for(name in names) {
    scores[name] = Reduce(function(acc, observation) acc * mixes[[name]][observation], observations, 1)
  }  
  return(scores)
}
l2 = likelihoods2(Names, Mixes, Observations)
 
> l2 / sum(l2)
  Bowl 1   Bowl 2 
0.627907 0.372093

So that’s good, we’ve still got the same probabilities as before. Now to get rid of the outer for loop. The Map function helps us out here:

likelihoods3 = function(names, mixes, observations) {
  scores = rep(0, length(names))
  names(scores) = names
 
  scores = Map(function(name) 
    Reduce(function(acc, observation) acc * mixes[[name]][observation], observations, 1), 
    names)
 
  return(scores)
}
 
l3 = likelihoods3(Names, Mixes, Observations)
> l3
$`Bowl 1`
  vanilla 
0.1054688 
 
$`Bowl 2`
vanilla 
 0.0625

We end up with a list instead of a vector which we need to fix by using the unlist function:

likelihoods3 = function(names, mixes, observations) {
  scores = rep(0, length(names))
  names(scores) = names
 
  scores = Map(function(name) 
    Reduce(function(acc, observation) acc * mixes[[name]][observation], observations, 1), 
    names)
 
  return(unlist(scores))
}
 
l3 = likelihoods3(Names, Mixes, Observations)
 
> l3 / sum(l3)
Bowl 1.vanilla Bowl 2.vanilla 
      0.627907       0.372093

Now we just have this annoying ‘vanilla’ in the name. That’s fixed easily enough:

likelihoods3 = function(names, mixes, observations) {
  scores = rep(0, length(names))
  names(scores) = names
 
  scores = Map(function(name) 
    Reduce(function(acc, observation) acc * mixes[[name]][observation], observations, 1), 
    names)
 
  result = unlist(scores)
  names(result) = names
 
  return(result)
}
 
l3 = likelihoods3(Names, Mixes, Observations)
 
> l3 / sum(l3)
  Bowl 1   Bowl 2 
0.627907 0.372093

A slightly cleaner alternative makes use of the sapply function:

likelihoods3 = function(names, mixes, observations) {
  scores = rep(0, length(names))
  names(scores) = names
 
  scores = sapply(names, function(name) 
    Reduce(function(acc, observation) acc * mixes[[name]][observation], observations, 1))
  names(scores) = names
 
  return(scores)
}
 
l3 = likelihoods3(Names, Mixes, Observations)
 
> l3 / sum(l3)
  Bowl 1   Bowl 2 
0.627907 0.372093

That’s the best I’ve got for now but I wonder if we could write a version of this using matrix operations somehow – but that’s for next time!
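
Purely as a sketch of one direction this could take (a guess on the approach, not the actual next post): put the bowl mixes into a matrix, count the observations per flavour, and the products of probabilities turn into sums of logs, which is just a matrix multiplication.

# rows are bowls, columns are flavours
mixMatrix = rbind("Bowl 1" = c(vanilla = 0.75, chocolate = 0.25),
                  "Bowl 2" = c(vanilla = 0.50, chocolate = 0.50))
 
Observations = c("vanilla", "vanilla", "vanilla", "chocolate")
 
# count how often each flavour was observed, in column order
counts = as.vector(table(factor(Observations, levels = colnames(mixMatrix))))
 
# products of probabilities become sums of logs, i.e. a single matrix product
l4 = drop(exp(log(mixMatrix) %*% counts))
 
> l4 / sum(l4)
  Bowl 1   Bowl 2 
0.627907 0.372093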

Categories: Programming

F# Exchange 2015

Phil Trelford's Array - Sat, 04/18/2015 - 12:47

This Friday saw the first ever F# eXchange, a one-day, two-track conference dedicated to all things F#, hosted at Skills Matter in London and attracting developers from across Europe.

There was a strong focus on open source projects throughout the day including MBrace (data scripting for the cloud), Fake (a DSL for build tasks), Paket (a dependency manager for .Net), the F# Power Tools and FunScript (an F# to JS compiler). In fact all the presenters used the open source project FsReveal to generate their slides!

Keynote

Tomas Petricek opened proceedings with a keynote on The Big F# and Open Source Love Story:

Slow development

One of Tomas’s observations, on slow development for open source projects, resonated with many: successful projects often start as just a simple script that fulfils a specific need and slowly gather momentum over time.

As an example, in his presentation Steffen Forkmann talked about how Fake had started as a simple F# script and over the years had seen more and more contributors and downloads, with the addition of high quality documentation having a huge impact:

FAKE in numbers: 4278 commits, 137 contributors, 157252 downloads! Amazing work by @sforkmann & contributors at #fsharpx

— Tomas Petricek (@tomaspetricek) April 17, 2015

Talks

All the talks were recorded, and all the videos are already online!

Speakers

The videos:

Steffen also took advantage of his talk to make a special announcement about Paket:

Yay. It's done. I just released @PaketManager 1.0 live on stage at #fsharpex - http://t.co/STtMAzh3BT

— Steffen Forkmann (@sforkmann) April 17, 2015

Panel

The day ended with some pizza, drinks and a panel organized by prolific F# contributor, Don Syme:

Panel Discussion

Each panel member pitched why they thought F# was good in their core domain area from cloud, games, design, data science, scripting through to web.

There were some interesting discussions, and some mentions of the recent fsharpWorks led F# Survey.

2016

The date for next year’s F# eXchange 2016, the 16th of April, is already in the calendar. Hope to see you there, and please take advantage of the early bird ticket offer: only 85GBP up until the 16th of June!

Categories: Programming

Life Quotes That Will Change Your Life

Life’s better with the right words.

And life quotes can help us live better.

Life quotes are a simple way to share some of the deepest insights on the art of living, and how to live well.

While some people might look for wisdom in a bottle, or in a book, or in a guru at the top of a mountain, surprisingly, a lot of the best wisdom still exists as quotes.

The problem is they are splattered all over the Web.

The Ultimate Life Quotes Collection

My ultimate Life Quotes collection is an attempt to put the best quotes right at your fingertips.

I wanted this life quotes collection to answer everything from “What is the meaning of life?” to “How do you live the good life?” 

I also wanted this life quotes collection to dive deep into all angles of life including dealing with challenges, living with regrets, how to find your purpose, how to live with more joy, and ultimately, how to live a little better each day.

The World’s Greatest Philosophers at Your Fingertips

Did I accomplish all that?

I’m not sure.  But I gave it the old college try.

I curated quotes on life from an amazing set of people including Dr. Seuss, Tony Robbins, Gandhi, Ralph Waldo Emerson, James Dean, George Bernard Shaw, Virginia Woolf, Buddha, Lao Tzu, Lewis Carroll, Mark Twain, Confucius, Jonathan Swift, Henry David Thoreau, and more.

Yeah, it’s a pretty serious collection of life quotes.

Don’t Die with Your Music Still In You

There are many messages and big ideas among the collection of life quotes.  But perhaps one of the most important messages is from the late, great Henry David Thoreau:

“Most men lead lives of quiet desperation and go to the grave with the song still in them.” 

And, I don’t think he meant play more Guitar Hero.

If you’re waiting for your chance to rise and shine, chances come to those who take them.

Not Being Dead is Not the Same as Being Alive

E.E. Cummings reminds us that there is more to living than simply existing:

“Unbeing dead isn’t being alive.” 

And the trick is to add more life to your years, rather than just add more years to your life.

Define Yourself

Life quotes teach us that living life on your terms starts by defining yourself.  Here are big, bold words from Harvey Fierstein that remind us of just that:

“Never be bullied into silence. Never allow yourself to be made a victim. Accept no one’s definition of your life; define yourself.”

Now is a great time to re-imagine all that you’re capable of.

We Regret the Things We Didn’t Do

It’s not usually the things that we do that we regret.  It’s the things we didn’t do:

“Of all sad words of tongue or pen, the saddest are these, ‘It might have been.’”  – John Greenleaf Whittier

Have you answered to your calling?

Leave the World a Better Place

One sure-fire way that many people find their path is they aim to leave the world at least a little better than they found it.

“To laugh often and much; to win the respect of intelligent people and the affection of children…to leave the world a better place…to know even one life has breathed easier because you have lived. This is to have succeeded.” -- Ralph Waldo Emerson

It’s a reminder that we can measure our life by the lives of the people we touch.

You Might Also Like

7 Habits of Highly Motivated People

10 Leadership Ideas to Improve Your Influence and Impact

Boost Your Confidence with the Right Words

The Great Inspirational Quotes Revamped

The Great Leadership Quotes Collection Revamped

Categories: Architecture, Programming

Approximating for Improved Understanding

Herding Cats - Glen Alleman - Fri, 04/17/2015 - 23:44

The world of projects, project management, and the products or services produced by those projects is uncertain. It's never certain. Seeking certainty is not only naive, it's simply not possible.

Making decisions in the presence of this uncertainty is part of our job as project managers, engineers, and developers on behalf of those paying for our work.

It's also the job of the business, whose money is being spent on the projects to produce tangible value in exchange for that money.

From the introduction of the book to the left...

Science and engineering, our modern ways of understanding and altering the world, are said to be about accuracy and precision. Yet we best master the complexity of our world by cultivating insight rather than precision. We need insight because our minds are but a small part of the world. An insight unifies fragments of knowledge into a compact picture that fits in our minds. But precision can overflow our mental registers, washing away the understanding brought by insight. This book shows you how to build insight and understanding first, so that you do not drown in complexity.

So what does this mean for our project world?

  • The future is uncertain. It is always uncertain. It can't be anything but uncertain. Assuming certainty is a waste of time. Managing in the presence of uncertainty is unavoidable. To do this we must estimate. This is unavoidable. To suggest otherwise is to willfully ignore the basis of all management practices.
  • This uncertainty creates risk to our project: cost, schedule, and risks to the delivered capabilities of the project or product development. To manage with a closed loop process, estimates are needed. This is unavoidable as well.
  • Uncertainty is either reducible or irreducible
    • Reducible uncertainty can be reduced with new information. We can buy down this uncertainty.
    • Irreducible uncertainty - the natural variations in what we do - can only be handled with margin.

In both these conditions we need to get organized in order to address the  underlying uncertainties. We need to put structure in place in some manner. Decomposing the work is a common way in the project domain. From a Work Breakdown Structure to simple sticky notes on the wall, breaking problems down into smaller parts is a known successful way to address a problem. 

With this decomposition, now comes the hard part. Making decisions in the presence of this uncertainty.

Probabilistic Reasoning

Reasoning about things that are uncertain is done with probability and statistics. Probability is a degree of belief:

I believe we have an 80% probability of completing on or before the due date for the migration of SQL Server 2008 to SQL Server 2012.

Why do we have this belief? Is it based on our knowledge from past experience? Is this knowledge sufficient to establish that 80% confidence?

  • Do we have some external model of the work effort needed to perform the task?
  • Is there a parametric model of similar work that can be applied to this work?
  • Could we decompose the work to smaller chunks that could then be used to model the larger set of tasks?
  • Could I conduct some experiments to improve my knowledge?
  • Could I build a model from intuition that could be used to test the limits of my confidence?

The answers to each of these informs our belief. 
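
To make the idea of probability as a degree of belief a little more concrete, here is a minimal Monte Carlo sketch in R; the task ranges, the uniform distributions and the due date are invented for illustration and are not drawn from any real migration. Each run samples a duration for the work, and the fraction of runs finishing on or before the due date is the confidence we can claim for that date.

set.seed(42)
trials = 10000
 
# invented three-point ranges (in days) for the pieces of the migration work
simulateOneRun = function() {
  backup   = runif(1, min = 2, max = 5)
  migrate  = runif(1, min = 5, max = 12)
  validate = runif(1, min = 3, max = 8)
  backup + migrate + validate
}
 
durations = replicate(trials, simulateOneRun())
 
dueDate = 20  # available working days, also invented
confidence = mean(durations <= dueDate)
confidence  # the fraction of simulated runs finishing on or before the due date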

Chaos, Complexity, Complex, Structured?

A well-known agile thought leader made a statement today:

I support total chaos in every domain

This is unlikely to result in sound business decisions in the presence of uncertainty. Although there may be domains where chaos might produce usable results, when we need some degree of confidence that the money being spent will produce the needed capabilities, on or before the need date, at or below the budget needed to be profitable, and with the collection of all the needed capabilities to accomplish the mission or meet the business case, we're going to need to know how to manage our work to achieve those outcomes.

So let's assume - with a high degree of confidence - that we need to manage in the presence of uncertainty, but that we have little interest in encouraging chaos. Here's one approach.

So In The End

Since all the world is a set of statistical processes producing probabilistic outcomes, which in turn create risk to any expected result when not addressed properly, the notion that decisions can be made in the presence of these conditions without addressing them can only be explained by a willful ignoring of the basic facts of the physics of project work.

  Related articles:
  • The Difference Between Accuracy and Precision
  • Making Decisions in the Presence of Uncertainty
  • Managing in Presence of Uncertainty
  • Herding Cats: Risk Management is How Adults Manage Projects
  • Herding Cats: Decision Analysis and Software Project Management
  • Five Estimating Pathologies and Their Corrective Actions
Categories: Project Management

No.

NOOP.NL - Jurgen Appelo - Fri, 04/17/2015 - 23:01
Say No.

Last week, I had a nice Skype call with a reader who was seeking my advice on becoming an author and speaker, and I gave him some pointers. I normally don’t schedule calls with random people asking for a favor, but this time I made an exception. I had a good reason.

The post No. appeared first on NOOP.NL.

Categories: Project Management

Australia - July/August 2015

Coding the Architecture - Simon Brown - Fri, 04/17/2015 - 17:42

It's booked! Following on from my trip to Australia and the YOW! 2014 conference in December last year, I'll be back in Australia during July and August. The rough plan is to arrive in Perth and head east; visiting at least Melbourne, Brisbane and Sydney again. I'm hoping to schedule some user group talks and, although there probably won't be any public workshops, I'll be running a limited number of in-house 1-day workshops and/or talks along the way too.

If you're interested in me visiting your office/company during my trip, please just drop me a note at simon.brown@codingthearchitecture.com. Thanks!

Categories: Architecture

Stuff The Internet Says On Scalability For April 17th, 2015

Hey, it's HighScalability time:

A fine tribute on Silicon Valley & hilarious formula evaluating Peter Gregory's positive impact on humanity.

  • 118/196: nations becoming democracies since the mid-19th century; $70K: nice minimum wage; 70 million: monthly StackExchange visitors; 1 billion: drone-planted trees; 1,000 Years: longest-exposure camera shot ever

  • Quotable Quotes:

    • @DrQz: #Performance modeling is really about spreading the guilt around.

    • @natpryce: “What do we want?” “More levels of indirection!” “When do we want it?” “Ask my IDateTimeFactoryImplBeanSingletonProxy!”

    • @BenedictEvans: In the late 90s we were euphoric about what was possible, but half what we had sucked. Now everything's amazing, but we worry about bubbles

    • Calvin Zito on Twitter: "DreamWorks Animation: One movie, 250 TB to make.10 movies in production at one time, 500 million files per movie. Wow."

    • Twitter: Some of our biggest MySQL clusters are over a thousand servers.

    • @SaraJChipps: It's 2015: open source your shit. No one wants to steal your stupid CRUD app. We just want to learn what works and what doesn't.

    • Calvin French-Owen: And as always: peace, love, ops, analytics.

    • @Wikipedia: Cut page load by 100ms and you save Wikipedia readers 617 years of wait annually. Apply as Web Performance Engineer

    • @IBMWatson: A person can generate more than 1 million gigabytes of health-related data.

    • @allspaw: "We’ve learned that automation does not eliminate errors." (yes!)  

    • @Obdurodon: Immutable data structures solve everything, in any environment where things like memory allocators and cache misses cost nothing.

    • KaiserPro: Pixar is still battling with lots of legacy cruft. They went through a phase of hiring the best and brightest directly from MIT and the like.

    • @abt_programming: "Duplication is far cheaper than the wrong abstraction" - @sandimetz

    • @kellabyte: When I see places running 1,200 containers for fairly small systems I want to scream "WHY?!"

    • chetanahuja: One of the engineers tried running our server stack on a raspberry for a laugh.. I was gobsmacked to hear that the whole thing just worked (it's a custom networking protocol stack running in userspace) if just a bit slower than usual.

  • Chances are if something can be done with your data, it will be done. @RMac18: Snapchat is using geofilters specific to Uber's headquarter to poach engineers.

  • Why (most) High Level Languages are Slow. Exactly this by masterbuzzsaw: If manual memory management is cancer, what is manual file management, manual database connectivity, manual texture management, etc.? C# may have “saved” the world from the “horrors” of memory management, but it introduced null reference landmines and took away our beautiful deterministic C++ destructors.

  • Why NFS instead of S3/EBS? nuclearqtip with a great answer: Stateful; Mountable AND shareable; Actual directories; On-the-wire operations (I don't have to download the entire file to start reading it, and I don't have to do anything special on the client side to support this); Shared unix permission model; Tolerant of network failures; Locking!; Better caching; Big files without the hassle.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Is LeSS more than SAFe?

Xebia Blog - Fri, 04/17/2015 - 14:48

(Large) Dutch companies looking for a way to scale up the benefits their Agile teams bring mostly use the Scaled Agile Framework (SAFe) as their reference model. This model is set up to be very accessible, also for managers, and training and certified consultants are available. As early as 2009, Craig Larman and Bas Vodde described their experiences with applying Scrum in large organisations (Nokia among them) in their books 'Scaling Lean & Agile Development' and 'Practices for Scaling Lean & Agile Development'. They called the method Large Scale Scrum, LeSS for short.
In recent years LeSS has led a rather quiet existence. Recently it was decided to put this valuable body of thought more firmly in the spotlight. A third book is coming this summer, the site less.works has been launched, a training tour has started, and Craig and Bas are appearing at the leading conferences. Bas, for example, will give a keynote at Xebicon 2015 in Amsterdam on June 4th. Is LeSS more or less than SAFe? Or more or less SAFe?

What is LeSS?
LeSS, then, is a method for structuring a large(r) organisation around Agile teams. As the name gives away, Scrum is the starting point. There are two flavours: 'plain' LeSS, for up to 8 teams, and LeSS Huge, for 8 teams and more. LeSS builds on mandatory rules, for example "An Overall Retrospective is held after the Team Retrospectives to discuss cross-team and system-wide issues, and create improvement experiments. This is attended by Product Owner, ScrumMasters, Team Representatives, and managers (if there are any).” In addition, LeSS has principles (design criteria). The principles form the frame of reference for taking the right design decisions. Finally, there are the Guidelines and Experiments, the things that have, or have not, proven successful in practice at organisations. Beyond the basic framework, LeSS goes deeper into:

  • Structure (the organisational structure)
  • Management (the changing role of management)
  • Technical Excellence (strongly based on XP and Continuous Delivery)
  • Adoption (the transformation to the LeSS organisation).

LeSS in a nutshell
The basis of LeSS is that Large Scale Scrum = Scrum! Just like SAFe, LeSS looks for a way to apply Scrum to a group of, say, 100 people. LeSS stays closest to Scrum: there is 1 sprint, with 1 Product Owner, 1 product backlog, 1 planning and 1 sprint review, in which 1 product is realised. This is different from SAFe, which defines a blown-up sprint (the Product Increment). To make this single-sprint implementation work you need, besides a very strong whole-product focus, also a technical platform that supports it, for example. Where SAFe pragmatically allows a gradual introduction of Agile at Scale, LeSS is stricter in its ready-for-the-start requirements. A structure has to be put in place that breaks through the culture of the 'contract game': the culture of over-asking, pressure, ambiguity, surprises and blame-driven accountability.

LeSS is more and less SAFe
The recent effort to make LeSS accessible will undoubtedly lead to sharply growing attention for this appealing approach to setting up Agile at Scale. LeSS is different from SAFe, although the two models, especially in their sources of inspiration, also have a lot in common.
The two models take a different angle, for example with regard to:

  • how to apply Scrum to a cluster of teams
  • the approach to the transformation towards Agile at Scale
  • how solutions are presented: SAFe gives the solution, LeSS the pros and cons of the choices

It is also notable that SAFe (with its portfolio level) explains how the connection between strategy and backlogs should be made. LeSS, on the other hand, pays more attention to the transformation (Adoption) and to Agile at very large scale (LeSS Huge).

Whether an organisation chooses LeSS or SAFe will depend on what fits the organisation best: what fits its ambition for change and its 'agility' at the moment of starting. Strongly 'blue' organisations will choose SAFe; organisations that dare to take a convincing step towards an Agile organisation will sooner choose LeSS. In either case it pays to take note of the solutions the other method offers.

For the Upcoming Graduates

Herding Cats - Glen Alleman - Fri, 04/17/2015 - 05:36

No matter your life experience, your view of the world, whether you've had military experience or not, this is - or should be - an inspirational commencement speech. 

 

Categories: Project Management

#NoEstimates

2751403120_ae8be53206_b

#NoEstimates is a lightning rod.

 

Estimation is one of the lightning rod issues in software development and maintenance. Over the past few years the concept of #NoEstimates has emerged and has become a movement within the Agile community. Due to its newness, #NoEstimates has several camps revolving around a central concept. This essay begins the process of identifying and defining a core set of concepts in order to have a measured discussion. A shared language across the gamut of estimating ideas, whether Agile or not, including #NoEstimates, is critical for comparing both sets of concepts. We begin our exploration of the ideas around the #NoEstimates concepts by establishing a context that includes classic estimation and #NoEstimates.

Classic Estimation Context: Estimation as a topic is often a synthesis of three related, but different concepts. The three concepts are budgeting, estimation and planning. Because these three concepts are often conflated, it is important to understand the relationship between the three. These are typical in a normal commercial organization, however these concepts might be called different things depending on your business model. An estimate is a finite approximation of cost, effort and/or duration based on some basis of knowledge (this is known as a basis of estimation). The flow of activity conflated as estimation often runs from budget, to project estimation, to planning. In most organizations, the act of generating a finite approximation typically begins as a form of portfolio management in order to generate a budget for a department or group. The budgeting process helps make decisions about which pieces of work are to be done. Most organizations have a portfolio of work that is larger than they can accomplish, therefore they need a mechanism to prioritize. Most portfolio managers, whether proponents of an Agile or a classic approach, would defend using value as a key determinant of prioritization. Value requires having some type of forecast of cost and benefit of the project over some timeframe. Once a project enters a pipeline in a classic organization, an estimate is typically generated. The estimate is generally believed to be more accurate than the original budget due to the information gathered as the project is groomed to begin. Plans break stories down into tasks, often with personnel assigned, generate an estimate of effort at the task level and sum those estimates into higher-level estimates. Any of these steps can (but should not) be called estimation. The three-level process described above, if misused, can cause several team and organizational issues. Proponents of the #NoEstimates movement often classify these issues as estimation pathologies; we will explore these “pathologies” in later essays.

#NoEstimates Context:  There are two macro camps in the #NoEstimates movement (the two camps probably reflect more of a continuum of ideas rather than absolutes). The first camp argues that a team should break work down into small chunks and then immediately begin completing those small chunks (doing the highest value first). The chunks would build up quickly to a minimum viable product (MVP) that can generate feedback, so the team can hone its ability to deliver value. I call this camp the “Feedback’ers”, and luminaries like Woody Zuill often champion this camp. A second camp begins in a similar manner – by breaking the work into small pieces, prioritizing on value (and perhaps risk), delivering against a MVP to generate feedback – but they measure throughput. Throughput is a measure of how many units of work (e.g. stories or widgets) a team can deliver in a specific period of time. Continuously measuring the throughput of the team provides a tool to understand when work needs to start in order for it to be delivered within a period of time. Average throughput is used to provide the team and other stakeholders with a forecast of the future. This is very similar to the throughput measure used in Kanban. People like Vasco Duarte (listen to my interview with Vasco) champion the second camp, which I tend to call the “Kanban’ers”.  I recently heard David Anderson, the Kanban visionary, discuss a similar #NoEstimates position using throughput as a forecasting tool. Both camps in the #NoEstimates movement eschew developing story- or task-level estimates. The major difference is on the use of throughput to provide forecasting, which is central to bottom-up estimating and planning at the lowest level of the classic estimation continuum.
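
As a small illustration of the throughput style of forecasting (a sketch with invented numbers, not data from any of the teams or people mentioned above): track how many stories the team completes per period, average that history, and divide the remaining backlog by the average to see how many periods are needed.

# stories completed in each of the last six two-week periods (invented)
throughputHistory = c(7, 9, 6, 8, 10, 7)
 
backlogSize   = 45   # stories remaining for the release (invented)
avgThroughput = mean(throughputHistory)
 
periodsNeeded = ceiling(backlogSize / avgThroughput)
periodsNeeded  # 6 periods with these numbers, so start roughly 12 weeks ahead of the need date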

When done correctly, both #NoEstimates and classic estimation are tools to generate feedback and create guidance for the organization. In its purest form #NoEstimates uses functionality to generate feedback and to provide guidance about what is possible. The less absolutist “Kanban’er” form of #NoEstimates uses both functional software and throughput measures as feedback and guidance tools. Classic estimation tools use plans and performance against the plan to generate feedback and guidance. The goal is usually the same; it is just that the mechanisms are very different. With an established context and vocabulary, we can explore the concepts more deeply.


Categories: Process Management

R: Think Bayes – More posterior probability calculations

Mark Needham - Thu, 04/16/2015 - 21:57

As I mentioned in a post last week I’ve been reading through Think Bayes and translating some of the examples from Python to R.

After my first post Antonios suggested a more idiomatic way of writing the function in R so I thought I’d give it a try to calculate the probability that combinations of cookies had come from each bowl.

In the simplest case we have this function which takes in the names of the bowls and the likelihood scores:

f = function(names,likelihoods) {
  # Assume each option has an equal prior
  priors = rep(1, length(names)) / length(names)
 
  # create a data frame with all info you have
  dt = data.frame(names,priors,likelihoods)
 
  # calculate posterior probabilities
  dt$post = dt$priors*dt$likelihoods / sum(dt$priors*dt$likelihoods)
 
  # specify what you want the function to return
  list(names=dt$names, priors=dt$priors, likelihoods=dt$likelihoods, posteriors=dt$post)  
}

We assume a prior probability of 0.5 for each bowl.

Given the following probabilities of different cookies being in each bowl…

mixes = {
  'Bowl 1':dict(vanilla=0.75, chocolate=0.25),
  'Bowl 2':dict(vanilla=0.5, chocolate=0.5),
}

…we can simulate taking one vanilla cookie with the following parameters:

Likelihoods = c(0.75,0.5)
Names = c("Bowl 1", "Bowl 2")
res=f(Names,Likelihoods)
 
> res$posteriors[res$name == "Bowl 1"]
[1] 0.6
> res$posteriors[res$name == "Bowl 2"]
[1] 0.4

If we want to simulate taking 3 vanilla cookies and 1 chocolate one we’d have the following:

Likelihoods = c((0.75 ** 3) * (0.25 ** 1), (0.5 ** 3) * (0.5 ** 1))
Names = c("Bowl 1", "Bowl 2")
res=f(Names,Likelihoods)
 
> res$posteriors[res$name == "Bowl 1"]
[1] 0.627907
> res$posteriors[res$name == "Bowl 2"]
[1] 0.372093

That’s a bit clunky and the intent of ‘3 vanilla cookies and 1 chocolate’ has been lost. I decided to refactor the code to take in a vector of cookies and calculate the likelihoods internally.

First we need to create a data structure to store the mixes of cookies in each bowl that we defined above. It turns out we can do this using a nested list:

bowl1Mix = c(0.75, 0.25)
names(bowl1Mix) = c("vanilla", "chocolate")
bowl2Mix = c(0.5, 0.5)
names(bowl2Mix) = c("vanilla", "chocolate")
Mixes = list("Bowl 1" = bowl1Mix, "Bowl 2" = bowl2Mix)
 
> Mixes
$`Bowl 1`
  vanilla chocolate 
     0.75      0.25 
 
$`Bowl 2`
  vanilla chocolate 
      0.5       0.5

Now let’s tweak our function to take in observations rather than likelihoods and then calculate those likelihoods internally:

likelihoods = function(names, mixes, observations) {
  scores = rep(1, length(names))
  names(scores) = names
 
  for(name in names) {
      for(observation in observations) {
        scores[name] = scores[name] *  mixes[[name]][observation]      
      }
    }  
  return(scores)
}
 
f = function(names,mixes,observations) {
  # Assume each option has an equal prior
  priors = rep(1, length(names)) / length(names)
 
  # create a data frame with all info you have
  dt = data.frame(names,priors)
 
  dt$likelihoods = likelihoods(names, mixes, observations)
 
  # calculate posterior probabilities
  dt$post = dt$priors*dt$likelihoods / sum(dt$priors*dt$likelihoods)
 
  # specify what you want the function to return
  list(names=dt$names, priors=dt$priors, likelihoods=dt$likelihoods, posteriors=dt$post)  
}

And if we call that function:

Names = c("Bowl 1", "Bowl 2")
 
bowl1Mix = c(0.75, 0.25)
names(bowl1Mix) = c("vanilla", "chocolate")
bowl2Mix = c(0.5, 0.5)
names(bowl2Mix) = c("vanilla", "chocolate")
Mixes = list("Bowl 1" = bowl1Mix, "Bowl 2" = bowl2Mix)
Mixes
 
Observations = c("vanilla", "vanilla", "vanilla", "chocolate")
 
res=f(Names,Mixes,Observations)
 
> res$posteriors[res$names == "Bowl 1"]
[1] 0.627907
 
> res$posteriors[res$names == "Bowl 2"]
[1] 0.372093

Exactly the same result as before! #win

Categories: Programming

Works with Google Cardboard: creativity plus compatibility

Google Code Blog - Thu, 04/16/2015 - 18:16

Posted by Andrew Nartker, Product Manager, Google Cardboard

All of us is greater than any single one of us. That’s why we open sourced the Cardboard viewer design on day one. And why we’ve been working on virtual reality (VR) tools for manufacturers and developers ever since. We want to make VR better together, and the community continues to inspire us.

For example: what began with cardboard, velcro and some lenses has become a part of toy fairs and art shows and film festivals all over the world. There are also hundreds of Cardboard apps on Google Play, including test drives, roller coaster rides, and mountain climbs. And people keep finding new ways to bring VR into their daily lives—from campus tours to marriage proposals to vacation planning.

It’s what we dreamed about when we folded our first piece of cardboard, and combined it with a smartphone: a VR experience for everyone! And less than a year later, there’s a tremendous diversity of VR viewers and apps to choose from. To keep this creativity going, however, we also need to invest in compatibility. That’s why we’re announcing a new program called Works with Google Cardboard.

At its core, the program enables any Cardboard viewer to work well with any Cardboard app. And the result is more awesome VR for all of us.

For makers: compatibility tools, and a certification badge

These days you can find Cardboard viewers made from all sorts of materials—plastic, wood, metal, even pizza boxes. The challenge is that each viewer may have slightly different optics and dimensions, and apps actually need this info to deliver a great experience. That’s why, as part of today’s program, we’re releasing a new tool that configures any viewer for every Cardboard app, automatically.

As a manufacturer, all you need to do is define your viewer’s key parameters (like focal length, input type, and inter-lens distance), and you’ll get a QR code to place on your device. Once a user scans this code using the Google Cardboard app, all their other Cardboard VR experiences will be optimized for your viewer. And that’s it.

Starting today, manufacturers can also apply for a program certification badge. This way potential users will know, at a glance, that a VR viewer works great with Cardboard apps and games. Visit the Cardboard website to get started.

The GoggleTech C1-Glass viewer works with Google Cardboard

For developers: design guidelines and SDK updates

Whether you’re building your first VR app, or you’ve done it ten times before, creating an immersive experience comes with a unique set of design questions like, “How should I orient users at startup?” Or “How do menus even work in VR?”

We’ve explored these questions (and many more) since launch, and today we’re sharing our initial learnings with the developer community. Our new design guidelines focus on overall usability, as well as common VR pitfalls, so take a look and let us know your thoughts.

Of course, we want to make it easier to design and build great apps. So today we're also updating the Cardboard SDKs for Android and Unity—including improved head tracking and drift correction. In addition, both SDKs support the Works with Google Cardboard program, so all your apps will play nice with all certified VR viewers.

For users: apps + viewers = choices

The number of Cardboard apps has quickly grown from dozens to hundreds, so we’re expanding our Google Play collection to help you find high-quality apps even faster. New categories include Music and Video, Games, and Experiences. Whether you’re blasting asteroids, or reliving the Saturday Night Live 40th Anniversary Special, there’s plenty to explore on Google Play.

New collections of Cardboard apps on Google Play

Today’s Works with Google Cardboard announcement means you’ll get the same great VR experience across a wide selection of Cardboard viewers. Find the viewer that fits you best, and then fire up your favorite apps.

For the future: Thrive Audio and Tilt Brush are joining the Google family

Most of today’s VR experiences focus on what you see, but what you hear is just as important. That’s why we’re excited to welcome the Thrive Audio team from the School of Engineering in Trinity College Dublin to Google. With their ambisonic surround sound technology, we can start bringing immersive audio to VR.

In addition, we’re thrilled to have the Tilt Brush team join our family. With its innovative approach to 3D painting, Tilt Brush won last year’s Proto Award for Best Graphical User Interface. We’re looking forward to having them at Google, and building great apps together.

Ultimately, today’s updates are about making VR better together. Join the fold, and let’s have some fun.

Categories: Programming

Drive app installs through App Indexing

Android Developers Blog - Thu, 04/16/2015 - 18:04

Posted by Lawrence Chang, Product Manager

You’ve invested time and effort into making your app an awesome experience, and we want to help people find the great content you’ve created. App Indexing has already been helping people engage with your Android app after they’ve installed it — we now have 30 billion links within apps indexed. Starting this week, people searching on Google can also discover your app if they haven’t installed it yet. If you’ve implemented App Indexing, when indexed content from your app is relevant to a search done on Google on Android devices, people may start to see app install buttons for your app in search results. Tapping these buttons will take them to the Google Play store where they can install your app, then continue straight on to the right content within it.

App installs through app indexing

With the addition of these install links, we are starting to use App Indexing as a ranking signal for all users on Android, regardless of whether they have your app installed or not. We hope that Search will now help you acquire new users, as well as re-engage your existing ones. To get started, visit g.co/AppIndexing and to learn more about the other ways you can integrate with Google Search, visit g.co/DeveloperSearch.

Categories: Programming

Announcing General Availability of Azure Premium Storage

ScottGu's Blog - Scott Guthrie - Thu, 04/16/2015 - 18:01

I’m very excited to announce the general availability release of Azure Premium Storage. It is now available with an enterprise grade SLA and is available for everyone to use.

Microsoft Azure now offers two types of storage: Premium Storage and Standard Storage. Premium Storage stores data durably on Solid State Drives (SSDs) and provides high performance, low latency, disk storage with consistent performance delivery guarantees.


Premium Storage is ideal for I/O-sensitive workloads - and is especially great for database workloads hosted within Virtual Machines.  You can optionally attach several premium storage disks to a single VM, and support up to 32 TB of disk storage per Virtual Machine and drive more than 64,000 IOPS per VM at less than 1 millisecond latency for read operations. This provides an incredibly fast storage option that enables you to run even more workloads in the cloud.

Using Premium Storage, Azure now offers the ability to run more demanding applications - including high-volume SQL Server, Dynamics AX, Dynamics CRM, Exchange Server, MySQL, Oracle Database, IBM DB2, MongoDB, Cassandra, and SAP solutions.

Durability

Durability of data is of utmost importance for any persistent storage option. Azure customers have critical applications that depend on the persistence of their data and high tolerance against failures. Premium Storage keeps three replicas of data within the same region, and ensures that a write operation will not be confirmed back until it has been durably replicated. This is a unique cloud capability provided only by Azure today.

In addition, you can also optionally create snapshots of your disks and copy those snapshots to a Standard GRS storage account - which enables you to maintain a geo-redundant snapshot of your data that is stored > 400 miles away from your primary Azure region for disaster recovery purposes.

Available Regions

Premium Storage is available today in the following Azure regions:

  • West US
  • East US 2
  • West Europe
  • East China
  • Southeast Asia
  • West Japan

We will expand Premium Storage to run in all Azure regions in the near future.

Getting Started

You can easily get started with Premium Storage starting today. Simply go to the Microsoft Azure Management Portal and create a new Premium Storage account. You can do this by creating a new Storage Account and selecting the “Premium Locally Redundant” storage option (note: this option is only listed if you select a region where Premium Storage is available).

Then create a new VM and select the “DS” series of VM sizes. The DS-series of VMs are optimized to work great with Premium Storage. When you create the DS VM you can simply point it at your Premium Storage account and you’ll be all set.

Learning More

Learn more about Premium Storage from Mark Russinovich's blog post on today's release.  You can also see a live 3 minute demo of Premium Storage in action by watching Mark Russinovich’s video on premium storage. In it Mark shows both a Windows Server and Linux VM driving more than 64,000 disk IOPS with low latency against a durable drive powered by Azure Premium Storage.


You can also visit the following links for more information:

Summary

We are very excited about the release of Azure Premium Storage. Premium Storage opens up so many new opportunities to use Azure to run workloads in the cloud – including migrating existing on-premises solutions.

As always, we would love to hear feedback via comments on this blog, the Azure Storage MSDN forum or send email to mastoragequestions@microsoft.com.

Hope this helps,

Scott

Categories: Architecture, Programming

Paper: Large-scale cluster management at Google with Borg

Joe Beda (@jbeda): Borg paper is finally out. Lots of reasoning for why we made various decisions in #kubernetes. Very exciting.

The hints and allusions are over. We now have everything about Google's long rumored Borg project in one iconic Google style paper: Large-scale cluster management at Google with Borg.

When Google blew our minds by audaciously treating the Datacenter as a Computer it did not go unnoticed that by analogy there must be an operating system for that datacenter/computer.

Now we have the story behind a critical part of that OS:

Google's Borg system is a cluster manager that runs hundreds of thousands of jobs, from many thousands of different applications, across a number of clusters each with up to tens of thousands of machines.

It achieves high utilization by combining admission control, efficient task-packing, over-commitment, and machine sharing with process-level performance isolation. It supports high-availability applications with runtime features that minimize fault-recovery time, and scheduling policies that reduce the probability of correlated failures. Borg simplifies life for its users by offering a declarative job specification language, name service integration, real-time job monitoring, and tools to analyze and simulate system behavior.

We present a summary of the Borg system architecture and features, important design decisions, a quantitative analysis of some of its policy decisions, and a qualitative examination of lessons learned from a decade of operational experience with it.

Virtually all of Google’s cluster workloads have switched to use Borg over the past decade. We continue to evolve it, and have applied the lessons we learned from it to Kubernetes

The next version of Borg was called Omega and Omega is being rolled up into Kubernetes (steersman, helmsman, sailing master), which has been open sourced as part of Google's Cloud initiative.

Note how the world has changed. A decade ago when Google published their industry changing Big Table and Map Reduce papers they launched a thousand open source projects in response. Now we are not only seeing Google open source their software instead of others simply copying the ideas, the software has been released well in advance of the paper describing the software.

The future still hangs in the balance. There's a huge fight going on over what software will look like, how it is built, how it is distributed, and who makes the money. In the search business, keeping software closed was a competitive advantage. In the age of AWS, the only way to capture hearts and minds is by opening up your software. Interesting times.

Categories: Architecture

Why Your Million Dollar Idea Is Worthless

Making the Complex Simple - John Sonmez - Thu, 04/16/2015 - 16:00

In this video, I talk about why I think some multimillion dollar ideas are worthless, and how some crappy ideas are worth millions. Full transcript: John: Hey, this is John Sonmez from Simpleprogrammer.com and today I want to tell you why your million dollar idea is worthless. So that’s right. Every time that I’m talking […]

The post Why Your Million Dollar Idea Is Worthless appeared first on Simple Programmer.

Categories: Programming

Creating Better User Experiences on Google Play

Android Developers Blog - Thu, 04/16/2015 - 03:44

Posted by Eunice Kim, Product Manager for Google Play

Whether it's a way to track workouts, chart the nighttime stars, or build a new reality and battle for world domination, Google Play gives developers a platform to create engaging apps and games and build successful businesses. Key to that mission is offering users a positive experience while searching for apps and games on Google Play. Today we have two updates to improve the experience for both developers and users.

A global content rating system based on industry standards

Today we’re introducing a new age-based rating system for apps and games on Google Play. We know that people in different countries have different ideas about what content is appropriate for kids, teens and adults, so today’s announcement will help developers better label their apps for the right audience. Consistent with industry best practices, this change will give developers an easy way to communicate familiar and locally relevant content ratings to their users and help improve app discovery and engagement by letting people choose content that is right for them.

Starting now, developers can complete a content rating questionnaire for each of their apps and games to receive objective content ratings. Google Play’s new rating system includes official ratings from the International Age Rating Coalition (IARC) and its participating bodies, including the Entertainment Software Rating Board (ESRB), Pan-European Game Information (PEGI), Australian Classification Board, Unterhaltungssoftware Selbstkontrolle (USK) and Classificação Indicativa (ClassInd). Territories not covered by a specific ratings authority will display an age-based, generic rating. The process is quick, automated and free to developers. In the coming weeks, consumers worldwide will begin to see these new ratings in their local markets.

To help maintain your apps’ availability on Google Play, sign in to the Developer Console and complete the new rating questionnaire for each of your apps. Apps without a completed rating questionnaire will be marked as “Unrated” and may be blocked in certain territories or for specific users. Starting in May, all new apps and updates to existing apps will require a completed questionnaire before they can be published on Google Play.

An app review process that better protects users

Several months ago, we began reviewing apps before they are published on Google Play to better protect the community and improve the app catalog. This new process involves a team of experts who are responsible for identifying violations of our developer policies earlier in the app lifecycle. We value the rapid innovation and iteration that is unique to Google Play, and will continue to help developers get their products to market within a matter of hours after submission, rather than days or weeks. In fact, there has been no noticeable change for developers during the rollout.

To assist in this effort and provide more transparency to developers, we’ve also rolled out improvements to the way we handle publishing status. Developers now have more insight into why apps are rejected or suspended, and they can easily fix and resubmit their apps for minor policy violations.

Over the past year, we’ve paid more than $7 billion to developers and are excited to see the ecosystem grow and innovate. We’ll continue to build tools and services that foster this growth and help the developer community build successful businesses.

Join the discussion on

+Android Developers
Categories: Programming

Helping developers connect with families on Google Play

Android Developers Blog - Thu, 04/16/2015 - 03:43

Posted by Eunice Kim, Product Manager, Google Play

There are thousands of Android developers creating experiences for families and children — apps and games that broaden the mind and inspire creativity. These developers, like PBS Kids, Tynker and Crayola, carefully tailor their apps to provide high-quality, age-appropriate content, from optimizing user interface design for children to building interactive features that both educate and entertain.

Google Play is committed to the success of this emerging developer community, so today we’re introducing a new program called Designed for Families, which allows developers to designate their apps and games as family-friendly. Participating apps will be eligible for upcoming family-focused experiences on Google Play that will help parents discover great, age-appropriate content and make more informed choices.

Starting now, developers can opt in their app or game through the Google Play Developer Console. From there, our team will review the submission to verify that it meets the Designed for Families program requirements. In the coming weeks, we’ll be adding new ways to promote family content to users on Google Play — we’ll have more to share on this soon.

Join the discussion on

+Android Developers
Categories: Programming

Power Great Gaming with New Analytics from Play Games

Android Developers Blog - Thu, 04/16/2015 - 03:30

By Ben Frenkel, Google Play Games team

A few weeks ago at the Game Developers Conference (GDC), we announced Play Games Player Analytics, a new set of free reports to help you manage your games business and understand in-game player behavior. Today, we’re excited to make these new tools available to you in the Google Play Developer Console.

Analytics is a key component of running a game as a service, which is increasingly a necessity for a successful mobile gaming business. When you take a closer look at large developers that do this successfully, you find that they do three things really well:

  • Manage their business to revenue targets
  • Identify hot spots in their business metrics so they can continuously focus on the game updates that will drive the most impact
  • Use analytics to understand how players are progressing, spending, and churning

“With player engagement and revenue data living under one roof, developers get a level of data quality that is simply not available to smaller teams without dedicated staff. As the tools evolve, I think Google Play Games Player Analytics will finally allow indie devs to confidently make data-driven changes that actually improve revenue.”

Kevin Pazirandeh
Developer of Zombie Highway 2

With Player Analytics, we wanted to make these capabilities available to the entire developer ecosystem on Google Play in a frictionless, easy-to-use way, freeing up your precious time to create great gaming experiences. Small studios, including the makers of Zombie Highway 2 and BombSquad, have already started to see the benefits and impact of Player Analytics on their business.

Further, if you integrate with Google Play game services, you get this set of analytics with no incremental effort. For a little extra work, you can also unlock another set of high-impact reports by integrating Google Play game services Events, starting with the Sources and Sinks report, which helps you balance your in-game economy.

If you already have a game integrated with Google Play game services, go check out the new reports in the Google Play Developer Console today. For everyone else, enabling Player Analytics is as simple as adding a handful of lines of code to your game to integrate Google Play game services.
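
For illustration, here is a minimal sketch of that integration, assuming the GoogleApiClient-based Play game services API current at the time; the activity name and callback handling are placeholders, and your own game will differ:

    import android.app.Activity;
    import android.os.Bundle;
    import com.google.android.gms.common.ConnectionResult;
    import com.google.android.gms.common.api.GoogleApiClient;
    import com.google.android.gms.games.Games;

    public class MyGameActivity extends Activity
            implements GoogleApiClient.ConnectionCallbacks,
                       GoogleApiClient.OnConnectionFailedListener {

        private GoogleApiClient mGoogleApiClient;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // Build a client for Google Play game services; signed-in play
            // sessions are what feed Player Analytics.
            mGoogleApiClient = new GoogleApiClient.Builder(this)
                    .addConnectionCallbacks(this)
                    .addOnConnectionFailedListener(this)
                    .addApi(Games.API)
                    .addScope(Games.SCOPE_GAMES)
                    .build();
        }

        @Override
        protected void onStart() {
            super.onStart();
            mGoogleApiClient.connect();
        }

        @Override
        protected void onStop() {
            super.onStop();
            mGoogleApiClient.disconnect();
        }

        @Override
        public void onConnected(Bundle connectionHint) {
            // The player is signed in to Play game services.
        }

        @Override
        public void onConnectionSuspended(int cause) {
            mGoogleApiClient.connect();
        }

        @Override
        public void onConnectionFailed(ConnectionResult result) {
            // Resolve the failure here (for example, by launching the sign-in flow).
        }
    }

Once players sign in through the Games API, their play sessions start appearing in Player Analytics with no further instrumentation.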

Manage your business to revenue targets

Set your revenue target in Player Analytics by choosing a daily goal

To help you assess the health of your games business, Player Analytics enables you to select a daily in-app purchase revenue target and then track how you're doing against that goal through the Target vs Actual report. Learn more.

Identify hot spots using benchmarks with the Business Drivers report

Ever wonder how your game’s performance stacks up against other games? Player Analytics tells you exactly how well you are doing compared to similar games in your category.

Metrics highlighted in red are below the benchmark. Arrows indicate whether a metric is trending up or down, and any cell with an icon can be clicked to see more details about the underlying drivers of the change. Learn more.

Track player retention by new user cohort

In the Retention report, you can see the percentage of players who continued to play your game on each of the seven days after installing it.

Learn more.

See where players are spending their time, struggling, and churning with the Player Progression report

Measured by the number of achievements players have earned, the Player Progression funnel shows where your players are struggling and churning, so you can refine your game and, ultimately, improve retention. Add more achievements to make progression tracking more precise.

Learn more.
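
As a rough sketch of how a progression milestone can feed this funnel, an achievement unlock through the same GoogleApiClient shown above might look like the following; the class name and achievement ID are placeholders for values you define in the Developer Console:

    import com.google.android.gms.common.api.GoogleApiClient;
    import com.google.android.gms.games.Games;

    public class ProgressionTracker {
        // Placeholder achievement ID; the real value comes from the Developer Console.
        private static final String ACH_LEVEL_1_COMPLETE = "REPLACE_WITH_ACHIEVEMENT_ID";

        private final GoogleApiClient apiClient;

        public ProgressionTracker(GoogleApiClient apiClient) {
            this.apiClient = apiClient;
        }

        // Call this at a progression milestone; each unlocked achievement
        // becomes a step in the Player Progression funnel.
        public void onLevelCompleted() {
            if (apiClient.isConnected()) {
                Games.Achievements.unlock(apiClient, ACH_LEVEL_1_COMPLETE);
            }
        }
    }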

Manage your in-game economy with the Sources and Sinks report

The Sources and Sinks report helps you balance your in-game economy by showing the relationship between how quickly players are earning or buying resources and how quickly they are using them.

For example, Eric Froemling, the one-man developer of BombSquad, used the Sources & Sinks report to help balance the rate at which players earned and spent tickets.

Read more about Eric’s experience with Player Analytics in his recent blog post.

To enable the Sources and Sinks report you will need to create and integrate Play game services Events that track sources of premium currency (e.g., gold coins earned), and sinks of premium currency (e.g., gold coins spent to buy in-app items).
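
A minimal sketch of what those Events calls might look like, again assuming the GoogleApiClient-era API; the class name and event IDs are placeholders for events you create in the Developer Console:

    import com.google.android.gms.common.api.GoogleApiClient;
    import com.google.android.gms.games.Games;

    public class EconomyTracker {
        // Placeholder event IDs; the real values come from the Developer Console.
        private static final String EVENT_COINS_EARNED = "REPLACE_WITH_SOURCE_EVENT_ID";
        private static final String EVENT_COINS_SPENT  = "REPLACE_WITH_SINK_EVENT_ID";

        private final GoogleApiClient apiClient;

        public EconomyTracker(GoogleApiClient apiClient) {
            this.apiClient = apiClient;
        }

        // Source: the player earned premium currency.
        public void onCoinsEarned(int amount) {
            if (apiClient.isConnected()) {
                Games.Events.increment(apiClient, EVENT_COINS_EARNED, amount);
            }
        }

        // Sink: the player spent premium currency.
        public void onCoinsSpent(int amount) {
            if (apiClient.isConnected()) {
                Games.Events.increment(apiClient, EVENT_COINS_SPENT, amount);
            }
        }
    }

With one event tracking earnings and another tracking spending, the report can compare how quickly currency enters and leaves your in-game economy.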

Categories: Programming