
Software Development Blogs: Programming, Software Testing, Agile Project Management


Feed aggregator

Show Me Your Math

Herding Cats - Glen Alleman - 6 hours 3 min ago

In a recent email exchange, the paper by Todd Little showing projects that exceeded their estimates was used as an example of how poorly we estimate, and ultimately as one of the reasons to adopt the #NoEstimates paradigm of making decisions in the absence of estimates of cost, schedule, and the probability that the needed capabilities will show up on time and be what the customer wanted.

Sherlock here had it right. This picture, by the way, is borrowed from Mike Cohn's eBook of 101 quotes for agile.

I've written about Little's paper before, but it's worth repeating.

It's very sporty to use examples of bad mathematics, bad management, bad processes, and bad practices as the basis for something new. This is essentially the basis of the book Hard Facts, Dangerous Half-Truths, and Total Nonsense: Profiting from Evidence-Based Management. When we start talking about something new and disruptive in the absence of the data, facts, root causes, and underlying governance and principles, we're on very thin ice in terms of credibility.

Here's the core principle of all software development

Customers exchange money for value. The definition of value needs to be in units of measure that allow them to make decisions about the future value of that value. That value is exchanged for a cost - a future cost as well.

Both this future cost and future value are probabilistic in nature, due to the uncertainties in the work processes, technologies, markets, productivity, and all the ...ilities associated with project work. In the presence of uncertainty, nothing is for certain - a tautology. There are two types of uncertainty - reducible and irreducible. Reducible uncertainty we can do something about: we can spend money to reduce the risk associated with the uncertainty. Irreducible uncertainty we cannot reduce; we can only hold margin, management reserve, or a Plan B.

To make decisions in the presence of these uncertainties - reducible and irreducible - we need to estimate the uncertainty, the cost of handling the uncertainty, and the value produced by the work driven by these uncertainties. When we fail to make these estimates, the uncertainties don't go away. When we slice the work into small chunks, we might also slice the uncertainties into small chunks - this is the basis of agile and the paradigm of Little Bets. But the uncertainties are still there, unless we've explicitly bought them down or installed margin and reserve. They didn't go away. And what you don't know - or choose to explicitly ignore - can hurt you.
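
To see what that means in numbers, here is a minimal sketch (my own illustration, with invented task estimates) of summing sliced work under uncertainty and reading the needed margin off the distribution:

    import numpy as np

    rng = np.random.default_rng(7)

    # 20 small slices of work, each estimated as (best, likely, worst) days.
    # The numbers are invented for illustration.
    slices = [(2, 3, 6)] * 20

    # Monte Carlo: sample a duration for every slice and sum, many times over.
    totals = sum(rng.triangular(lo, mode, hi, size=100_000)
                 for lo, mode, hi in slices)

    likely_sum = sum(mode for _, mode, _ in slices)  # naive "add up the cards" total
    p80 = np.percentile(totals, 80)                  # an 80% confidence estimate

    print(f"sum of most-likelies: {likely_sum} days")
    print(f"P80 completion:       {p80:.1f} days")
    print(f"margin needed:        {p80 - likely_sum:.1f} days")

The slices are small, but the P80 total still sits well above the sum of the most-likely values; that gap is the margin or management reserve that only an estimate makes visible.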

So Sherlock is right: don't put forth a theory without the data.

 

Categories: Project Management

Models, Models Everywhere


CMMI, ITIL, Six Sigma, Agile, waterfall, software development life cycle and eXtreme Programming . . . what do all of these terms have in common?  They are models.  In a perfect world, models are abstractions that we find useful to explain the world around us.  Models work by rendering complex ideas more simply.  For example, a road map and a picture rendered in Google Earth are both models, but two very different types: the road map is an abstraction of a set of roads, buildings, parks and plants, while the Google Earth rendering provides far more detail.  Real life is complex, Google Earth is less complex and the road map is the least complex.  Simplifying a concept to a point allows understanding, while too much simplification renders the concept a pale reflection.  Oversimplification can lead to misunderstandings or misconceptions, for example the conception that Agile methods are undisciplined or that waterfall methods are bureaucratic.  Both of these are misconceptions (individual implementations are another story).  According to Kenji Hiranabe, software development is a form of communication game.  Communication requires that groups understand a concept so that it can be implemented.  Communication and understanding require finding a level where common understanding based on common words can occur.  Words provide the simplification of real life into a form of model.

Unfortunately it is difficult to determine when enough simplification is enough.  Oversimplification of ideas can allow trouble to creep in.  Oversimplification can water down a concept to the point that it cannot provide useful information to be used operationally.  An example of a very simple model is the five maturity levels commonly connected to the CMMI.  The maturity levels build awareness, but provide little operational information.  I do not know how many times I have heard people talk about an individual maturity level as if the name given to that level was all you needed to know about it.  The less simplified version, with process areas, goals and practices, provides substantial operational information.  'Operationalizing' an overly simplified model will yield unanticipated results, and that is not what we want from a model.  I once built a model of the battleship Missouri that had horrible directions (directions are a model of a model); I used three firecrackers to remodel the thing I ended up with (which was not a very good model).

Models abound in the world of information technology.  If we don't have a model for it, we at least have a TLA (three letter acronym) for it and are working on a model that will incorporate it.  The models that have lasting power provide structure, discipline and control.  They're also used as tools to guide work (tightly or loosely, depending on the organization) and as scaffolds to define improvements in a structured manner.  Models are powerful, molding, bending and guiding legions of IT practitioners.  The dark side of this power is that the choice of model can become a definitional statement for a group or organization.  Selecting a model can elicit all of the passions of politics or religion.  Just consider the emotions you feel when describing Six Sigma, CMMI, eXtreme Programming, waterfall or Agile.  One of those words will probably be a hot button.  The power of models can become seductive and entrenched, reducing your ability to change and adapt as circumstances demand.  A model is never a goal!  Define your expectations for the model or models you are using in business terms.  Examples of goals I would expect are increased customer satisfaction, improved quality or faster time-to-market, rather than attaining CMMI Maturity Level 2 or implementing daily builds.  Know what you want to accomplish, then integrate the models and tactics to achieve those goals.  Do not let the tool be the goal.

Models are powerful, useful tools to ensure understanding; they provide structure and discipline.  Perform a health check.  Questions to ask about the models in your organization begin with:

  1. Is there a deep enough understanding of the model being used? – With knowledge comes the background to operationalize the model.
  2. What are your expectations of value from the model? – Knowing what you want from a model helps ensure that the model does not become the goal and that you retain the ability to be flexible.

There are many other questions you can ask in your health check; however, if you can't answer these questions well, stop and reassess, re-evaluate, re-train and re-plan your effort.

CMMI, ITIL, Six Sigma, Agile, waterfall, software development life cycle and eXtreme Programming . . . powerful tools or a straitjacket. Which is it for you?


Categories: Process Management

Want to preview our new DataGrid for Xamarin.Forms?

Eric.Weblog() - Eric Sink - Fri, 11/21/2014 - 19:00
tl;dr

Zumero.DataGrid is a Xamarin.Forms control for displaying data in rows and columns.

If you would be interested in testing and previewing a trial version of this product, email us:

  • datagrid-testing@zumero.com

Details
  • We plan to submit this to the Xamarin Component Store where (pending Xamarin's approval) it would be a commercial product. Pricing has not been set.

  • You'll receive a file in Xamarin Component format (a .xam file) which you will need to install into your IDE.
    (command line: xamarin-component.exe install ZumeroDataGrid-1.0.0.xam)
    The component file contains the libraries for all platforms, plus documentation and samples.

  • The test component is a trial version. It overlays the contents of some (randomly chosen) grid cells with an orange label that says "Trial". It is otherwise identical to the full version.

  • This component is unrelated to the one I mentioned in a blog entry back on 3 October. That approach was based on drawing. It was not capable of hosting Xamarin.Forms views, so it couldn't do things like in-place editing. This component is entirely different.

  • We are eager to hear feedback of any kind.

Features
  • Support for cell contents to be any Xamarin.Forms.View
  • Scrolling, both horizontal and vertical
  • Optional top frozen header row
  • Optional left frozen column
  • Separate View template for each column
  • View recycling (If you have 1000 rows but only 10 can be visible, DataGrid creates 10 Views and reuses them; see the conceptual sketch after this list.)
  • Data row objects (provided to the DataGrid as an IList<object>)
  • Strong integration with the Xamarin.Forms binding mechanism
  • Automatic updates if an ObservableCollection is used for rows and columns
  • Configurable row height (for all rows in the grid)
  • Configurable column widths (can be different for each column)
  • Configurable spacing between rows and columns
  • Works with C#, F#, and XAML
  • Support for iOS, Android, and Windows Phone
  • Full API documentation in MonoDoc
  • Technical support service from Zumero
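
To illustrate what view recycling means, here is a conceptual sketch in Python (this is not the Zumero API, just the pattern the feature name describes): a fixed pool of views is re-bound to whichever rows are currently visible.

    class RecyclingGrid:
        """Toy model of view recycling: views are created once, re-bound often."""

        def __init__(self, rows, visible_count):
            self.rows = rows
            # Only as many "views" as can ever be on screen -- not len(rows).
            self.views = [{"bound_to": None} for _ in range(visible_count)]

        def scroll_to(self, first_visible):
            # Re-bind the same view objects to the newly visible rows.
            for i, view in enumerate(self.views):
                view["bound_to"] = self.rows[first_visible + i]
            return [v["bound_to"] for v in self.views]

    grid = RecyclingGrid([f"row {n}" for n in range(1000)], visible_count=10)
    print(grid.scroll_to(0))    # rows 0..9: ten views created, total
    print(grid.scroll_to(500))  # rows 500..509: the same ten views, re-bound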

Stuff The Internet Says On Scalability For November 21st, 2014

Hey, it's HighScalability time:


Sweet dreams brave Philae. May you awaken to a bright-throned dawn for one last challenge.

 

  •  80 million: bacteria transferred in a juicy kiss
  • Quotable Quotes:
    • James Hamilton: Every day, AWS adds enough new server capacity to support all of Amazon's global infrastructure when it was a $7B annual revenue enterprise.
    • @iglazer: What is the test that could most destroy your business model?  Test for that. @adrianco #defragcon
    • @zhilvis: Prefer decoupling over duplication. Coupling will kill you before duplication does by @ICooper #buildstufflt
    • @jmbroad: "Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful." ~George Box
    • @RichardWarburto: Optimisation may be premature but measurement isn't.
    • @joeerl: Hell hath no version numbers - the great ones saw no need for version numbers - they used port numbers instead. See, for example, RFC 821,
    • JustCallMeBen: tldr: queues only help to flatten out burst load. Make sure your maintained throughput is high enough.
    • @rolandkuhn: «the event log is a database of the past, not just of the present» — @jboner at #reactconf
    • @ChiefScientist: CRUD is dead. -- @jboner #reactconf 
    • @fdmts: 30T of flash disks cabled up and online.  Thanks @scalableinfo!
    • monocasa: Immutable, statically linked, minimal system images running micro services on top of a hypervisor is a very old concept too. This is basically the direction IBM went in the 60's with their hypervisors and they haven't looked back.
    • Kiril Savino: Scaling is the process of decoupling load from latency.

  • Perhaps they were controlled by a master AI? Google and Stanford Built Similar Neural Networks Without Knowing It: Neural networks can be plugged into one another in a very natural way. So we simply take a convolutional neural network, which understands the content of images, and then we take a recurrent neural network, which is very good at processing language, and we plug one into the other. They speak to each other—they can take an image and describe it in a sentence.

  • You know how you never really believed the view in MVC was ever really separate? Now this is MVC. WatchKit apps run on the iPhone as an extension, only the UI component runs on the watch. XWindows would be so proud.

  • Shopify shows how they Build an Internal Cloud with Docker and CoreOS: Shopify is a large Ruby on Rails application that has undergone massive scaling in recent years. Our production servers are able to scale to over 8,000 requests per second by spreading the load across 1700 cores and 6 TB RAM.

  • Machine learning isn't just about creating human-like AIs. It's a technology, like electricity, that will transform everything it affixes with its cyclops gaze. Here's a seemingly mundane example from Google, as discussed on the Green (Low Carbon) Data Center Blog. Google has turned inward, applying machine learning to its data center fleet. The result: Google achieved an 8% to 25% reduction in the energy used to cool its data centers, with an average of 15%. Who wouldn't be excited to save an average of 15% on their cooling energy costs by providing new settings to run the mechanical plant? < And this is how the world will keep those productivity increases reaching skyward.

  • Does anyone say "I love my water service"? Or "I love my garbage service"? Then why would anyone say "I love Facebook"? That's when you've arrived. When you are so much a part of the way things are that people don't even think of loving them or not. They just are. The Fall of Facebook

Don't miss all that the Internet has to say on Scalability. Click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read, so please keep on reading)...

Categories: Architecture

Scaling Agile – SAFe Program Increment: Inspect and Adapt


Most Agile frameworks are built on empirical models of process control. These, in turn, are based on periodic observation and adaptation. Each instantiation of the process may be different due to the circumstances any specific team faces. The empirical model is often compared to the defined process approach, which posits a standard process for all teams that evolves based on the needs of the collective whole rather than a team or small group of teams. Scaled Agile Framework Enterprise (SAFe) program increments are no different. Program increments are similar to a large Scrum sprint, beginning with planning and completing with a demonstration and retrospective, known as inspect and adapt. The empirical process of inspect and adapt is one of the core principles in SAFe. At the team level, each sprint culminates with the classic demonstration and retrospective. The same type of discipline is leveraged for program increments. PI-level inspect and adapt is composed of three components:

  1. Demonstration. The teams involved in the program increment demonstrate the system with the new features, architectural changes and non-functional requirements that have been completed during the PI. The goal of the demonstration is to evoke feedback on the current status of the overall application from the Agile release train (ART) stakeholders.
  2. Metrics Review. Program metrics are reviewed and discussed as a team. The goal of the metrics review and the ensuing discussion is to ensure that everyone involved understands performance based on tactical experience and measured performance. The combination of experience and measurement helps the team see through their individual cognitive biases.
  3. Retrospective/Problem Solving Workshop. The problem solving workshop provides a platform for the PI participants to first identify the issues experienced during the increment, then to identify which of those they would like to solve, and finally to develop a corrective action plan. SAFe recommends that a structured root cause analysis be used to analyze the issues and define the corrective action plan (the preferred method uses the “five whys” and Ishikawa/fishbone diagrams – both tried and true techniques from the quality and process improvement arena). The goal of the process is to generate tangible improvement actions that can be taken into the release planning event for the next program increment.

The inspect and adapt process in SAFe is a means of generating feedback on the product that is being developed and the processes being used to do the work. Feedback is a means of ensuring the Agile release train stays on track. Inspect and adapt provides feedback so that changes can be made to the trajectory of the product being built and to how work is being done.


Categories: Process Management

Episode 215: Gang of Four – 20 Years Later

Johannes Thönes talks with Erich Gamma, Ralph Johnson and Richard Helm from the Gang of Four about the 20th anniversary of their book Design Patterns. They discuss the following topics: the definition of a design pattern and each guest’s favorite design pattern; the origins of the book in architecture workshops; the writing of the book […]
Categories: Programming

Google Analytics Demos & Tools

Google Code Blog - Thu, 11/20/2014 - 18:05
As a member of the Google Analytics Developer Relations team, I often hear from our community that they want to do more with GA but don't always know how. They know the basics but want to see full examples and demos that show how things should be built.

Well, we've been listening, and today I'm proud to announce the launch of Google Analytics Demos & Tools, a new website geared toward helping Google Analytics developers tackle the challenges they face most often.

The site aims to make experienced developers more productive (we use it internally all the time) and to show new users what's possible and inspire them to leverage the platform to improve their business through advanced measurement and analysis.

Some highlights of the site include a full-featured Enhanced Ecommerce demo with code samples for both Google Analytics and Google Tag Manager, a new Account Explorer tool to help you quickly find the IDs you need for various Google Analytics services and 3rd party integrations, several examples of easy-to-build custom dashboards, and some old favorites like the Query Explorer.

Google Analytics Demos & Tools not only shows off Google Analytics technologies, it also uses them under the hood. All pages that require authorization use the Embed API to log users in, and usage statistics, including outbound link clicks, authorization status, client-side exceptions, and numerous other user interaction events are measured using analytics.js.

Every page that makes use of a Google Analytics technology lists that information in the footer, making it easy for developers to see how all the pieces fit together. In addition, the entire site is open sourced and available on GitHub, so you can dive in and see exactly how everything works.

Feedback is welcome and appreciated!

Posted By Philip Walton, Developer Programs Engineer
Categories: Programming

Built to Kill

Software Requirements Blog - Seilevel.com - Thu, 11/20/2014 - 17:00
During a conversation with a lead engineer working on the Google self-driving car project, it was mentioned that the car would be programmed to consistently break the speed limit. On average, the car will travel 10 mph over any posted speed limit. Why design a car to deliberately break the law? Safety, primarily; since every […]
Categories: Requirements

How Do You Estimate What You Don’t Know?

Making the Complex Simple - John Sonmez - Thu, 11/20/2014 - 16:00

In this video I answer a question about how to estimate something that you don’t know how to do yet.

The post How Do You Estimate What You Don’t Know? appeared first on Simple Programmer.

Categories: Programming

musiXmatch drives user engagement through innovation

Android Developers Blog - Thu, 11/20/2014 - 15:54

Posted by Leticia Lago, Google Play team

musiXmatch is an app that offers Android users the unique and powerful feature FloatingLyrics. FloatingLyrics pops up a floating window showing synched lyrics as users listen to tracks on their favorite player and music services. It’s achieved through a seamless integration with intents on the platform, something that’s technically possible only on Android.

As a result musiXmatch has seen “a dramatic increase in terms of engagement,” says founder Max Ciociola, “which has been two times more active users and two times the average time they spend in the app.”

The ability to deliver lyrics to a range of different devices — such as Chromecast, Android TV, and Android Wear — is creating opportunities for musiXmatch. It’s helping them turn their app into a smart companion for their users and getting them closer to their goal of reaching 100 million people.

In the following video, Max and Android engineer Sebastiano Gottardo talk about the unique capabilities that Android offers to musiXmatch:

To learn about achieving great user engagement and retention and reaching more users through different form factors, be sure to check out these resources:

  • Convert installs to active users — Watch this video to hear from Matteo Vallone, Partner Development Manager for Google Play, about the best practices in engaging and retaining app users using intents, identity, context, and rich notifications as well as delivering a cross-platform user experience.
  • Expanding to new form factors: Tablet, Wear & TV — Watch this panel discussion with Google experts talking about cross-platform opportunities and answering developer questions.
Categories: Programming

Populist versus Technical View of Problems

Herding Cats - Glen Alleman - Thu, 11/20/2014 - 04:17

In recent Twitter discussions and email exchanges, a notion of populist books versus technical books was used to address issues and problems encountered in our project management domains. My recent book Performance-Based Project Management® is a populist book. There are principles, practices, and processes in the book that can be put to use on real projects, but very few equations and numbers. It's mostly narrative about increasing the probability of project success. But the machinery to calculate that probability from other numbers, processes, and systems is not there. That's the realm of technical books and journal papers.

The content of the book was developed with the help of editors at the American Management Association, the publisher. The Acquisition Editor contacted me about writing a book for the customers of AMA. He explained up front that AMA is in the money-making business of selling books, and that although I may have many good ideas, even ideas that people might want to read about, it's an AMA book and I'd be getting lots of help developing those ideas into a book that will make money for AMA.

The distinction between a populist book and a technical book is the difference between a book that addresses a broad audience with a general approach to the topic and a deep-dive book focused on a narrow audience.

But one other distinction is that for most of the technical approaches, some form of calculation takes place to support the materials found in the populist material. One simple example is estimating. There are estimating articles and some books that lay out the principles of estimates. We have those in our domain in the form of guidelines and a few texts. But to calculate the Estimate To Complete in a statistically sound manner, technical knowledge and the underlying mathematics of non-linear, non-stationary, stochastic processes (Monte Carlo simulation of the project's work structure) are needed.
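
For a flavor of what the technical side adds, here is a deliberately tiny sketch (invented numbers; a real model covers the full work structure, correlations, and the risk register) of reading an Estimate To Complete off a Monte Carlo simulation:

    import numpy as np

    rng = np.random.default_rng(11)

    # Remaining work packages as (best, likely, worst) days -- invented numbers.
    remaining = [(5, 8, 20), (3, 4, 9), (10, 14, 30), (2, 2, 5)]

    # A crude nod to non-stationarity: each later package drifts 10% worse.
    etc_samples = sum(rng.triangular(lo, mode, hi, 50_000) * 1.10 ** i
                      for i, (lo, mode, hi) in enumerate(remaining))

    print(f"ETC P50: {np.percentile(etc_samples, 50):.0f} days")
    print(f"ETC P80: {np.percentile(etc_samples, 80):.0f} days")

The populist treatment stops at "estimate the remaining work"; the technical treatment is what lets you say "80% confident" and mean something by it.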

Two examples of populist versus technical

Two from my past, two from my current work.


These two books are about the same topic: general relativity and its description of the shape of our universe. One is a best-selling popularization of the topic, found in many home libraries of those interested in this fascinating topic. The one on the left is on my shelf from a graduate school course on General Relativity, along with Misner, Thorne, and Wheeler's Gravitation.

Dense is an understatement for the math and the results of the book on the left. So if you want to calculate something about a rapidly spinning Black Hole, you're going to need that book. The book on the right will talk about those Black Holes in non-mathematical terms, but no numbers come out from that description.

The book on the right is about probabilistic processes in everyday life that we misunderstand or are biased to misunderstand. The many cognitive biases we use to convince ourselves we are making the right decisions on projects are illustrated through nice charts and graphs.

We use the book on the left in our work with non-stationary stochastic processes in complex project cost and schedule modeling. Making these decisions is critical to quantifying how technical and economic risk may affect a system's cost. The book is a treatment of how probability methods are applied to model, measure, and manage risk, schedule, and cost engineering for advanced systems. Garvey shows how to construct models, do the calculations, and make decisions with these calculations.

Here's The Point - Finally

If you come across a suggestion that decisions can be made in the absence of knowing anything about the future numbers or about actually doing the math, put that suggestion in the class of populist descriptions of a complex topic.

If you can't calculate something, then you can't make a decision based on the evidence represented by numbers. If you can't decide based on the math, then the only way left is to decide on intuition, hunches, opinion, or some other seriously flawed non-analytical basis.

Just a reminder from Mr. Deming, stated in yesterday's post.


If it's not your money, there's likely an expectation that those providing the money are interested in the calculations needed to make those decisions.

Act accordingly.

Categories: Project Management

Chinese Developers Can Now Offer Paid Applications to Google Play Users in More Than 130 countries

Android Developers Blog - Thu, 11/20/2014 - 03:07

By Ellie Powers, product manager for Google Play

Google Play is the largest digital store for Android users to discover and purchase their favorite mobile apps and games, and the ecosystem is continuing to grow globally. Over the past year, we’ve expanded the list of countries where app developers can sign up to be merchants on Google Play, totaling 60 countries, including Lebanon, Jordan, Oman, Pakistan, Puerto Rico, Qatar and Venezuela most recently.

As part of that continued effort, we’re excited to announce merchant support in China, enabling local developers to export and sell their apps to Google Play users in more than 130 countries. Chinese developers can now offer both free and paid applications through various monetization models, including in-app purchasing and subscriptions. For revenue generated on Google Play, developers will receive payment to their Chinese bank accounts via USD wire transfers.

If you develop Android apps in China and want to start distributing your apps to a global audience through Google Play, visit play.google.com/apps/publish and register as a developer. If you want to sell apps and in-app products, you'll need to also sign up for a Google Wallet merchant account, which is available on the “Revenue” page in the Google Play Developer Console. After you’ve uploaded your apps, you can set prices in the Developer Console and later receive reports on your revenue. You’ll receive your developer payouts via wire transfer. For more details, please visit our developer help center.

We look forward to continuing to roll out Google Play support to developers in many more countries around the world.


Categories: Programming

Scaling Agile – SAFe Agile Release Planning: Program Increment Objectives


Program increment (PI) objectives are summaries of business and technical goals at multiple levels within a program increment.  In the Scaled Agile Framework (SAFe) there are two levels of PI objectives. Team PI objectives reflect the summary of the specific business and technical goals that have been planned during release planning. Program PI objectives are at a higher level of abstraction, reflecting the summary of the teams' PI objectives to describe the overall program increment. The process of generating the PI objectives serves three primary purposes. They are to:

  1. Create alignment. Teams in the ART synthesize the visions (business context, product and architecture visions) presented at the beginning of the planning event, along with the prioritized backlog, to generate business and technical stories, which are then planned. Stories and spikes are added and planned by the team until they reach capacity. When the process of planning is complete, the work that has been planned is summarized into the team PI objectives. PI objectives are communication vehicles for the team to share what they are doing with those outside the team, and for other teams and stakeholders (those outside the team boundary) to consume and understand. The PI objectives are a stand-in for a laundry list of user stories, which allows all of the ART teams to compare objectives.  The comparison process helps to ensure teams are aligned so that the program increment vision can be delivered. Program PI objectives are generated by synthesizing the teams' PI objectives; they are another tool to ensure all of the teams are aligned.
  2. Generate agreement of feasibility. The process of synthesizing the stories into team PI objectives and then from team PI objectives into program PI objectives is a mechanism to communicate what each team believes feasible in the program increment time frame. As team-level PI objectives are communicated and synthesized into program PI objectives, the ART has another opportunity to expose dependencies or to ensure that dependencies are understood.
  3. Set Expectations. PI objectives, whether at the team or program level, are commitments to perform. PI objectives can be thought of as a line in the sand, based on plans generated by the teams that are committed to doing the work (rather than plans and objectives made for the teams).

The release planning process is a mechanism for teams to plan the work they are going to do in a program increment. PI objectives are a mechanism to summarize the detail that planning creates so that everyone involved in the Agile release train can agree upon what will be delivered. At the end of a program increment those involved in the increment can reflect on the objectives both at a program and team level and use what was learned to continuously improve.


Categories: Process Management

Keeping Your Saved Games in the Cloud

Android Developers Blog - Wed, 11/19/2014 - 19:47

Posted by Todd Kerpelman, Developer Advocate

Saved Games Are the Future!

I think most of us have at least one or two games we play obsessively. Me? I'm a Sky Force 2014 guy. But maybe you're into matching colorful objects, battling monsters, or helping avians with their rage management issues. Either way, there's probably some game where you've spent hours upon hours upgrading your squad, reaching higher and higher levels, or unlocking every piece of bonus content in the game.

Now imagine losing all of that progress when you decide to get a new phone. Or reinstall your game. Yikes!

That's why, when Google Play Games launched, one of the very first features we included was the ability for users to save their game to the cloud using a service called the AppState API. For developers who didn't need an entire server-based infrastructure to run their game, but didn't want to lose players when they upgraded their devices, this feature was a real life-saver.

But many developers wanted even more. With AppState, you were limited to 4 slots of 256k of data each, and for some games, this just wasn't enough. So this past year at Google I/O, we launched an entirely new Saved Games feature, powered by Google Drive. This gave you huge amounts of space (up to 3MB per saved game with unlimited save slots), the ability to save a screenshot and metadata with your saved games, and some nice features like showing your player's saved games directly in the Google Play app.

...But AppState is Yesterday's News

Since the introduction of Saved Games, we've seen enough titles happily using the service and heard enough positive feedback from developers that we're convinced that Saved Games is the better offering and the way to go in the future. With that in mind, we've decided to start deprecating the old cloud save system using AppState and are encouraging everybody who's still using it to switch over to the new Saved Games feature (referred to in our libraries as "Snapshots").

What does this mean for you as a game developer?

If you haven't yet added Saved Games to your game, now would be the perfect time! The holidays are coming up and your players are going to start getting new devices over the next couple of months. Wouldn't it be great if they could take your game's progress with them? Unless, I guess, "not retaining users" is part of your business plan.

If you're already using the new Saved Games / Snapshot system, put your feet up and relax. None of these changes affect you. Okay, now put your feet down, and get back to work. You probably have a seasonal update to work on, don't you?

If you're using the old AppState system, you should start migrating your players' data over to the new Saved Games service. Luckily, it's easy to include both systems in the same game, so you should be able to migrate your users' data without their ever knowing. The process would probably work a little something like this:

  • Enable the new Saved Game service for your game by
    • Adding the Drive.SCOPE_APPFOLDER scope to your list of scopes in your GoogleApiClient.
    • Turning on Saved Games for your game in the Google Play Developer Console.
  • Next, when your app tries to load the user's saved game
    • First see if any saved game exists using the new Saved Games service. If there is, go ahead and use it.
    • Otherwise, grab their saved game from the AppState service.
  • When you save the user's game back to the cloud, save it using the new Saved Games service.
  • And that should be it! The next time your user loads up your game, it will find their saved data in the new Saved Games service, and they'll be all set.
  • We've built a sample app that demonstrates how to perform these steps in your application, so we encourage you to check it out.
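
The ordering in those steps is the whole trick, and it is easy to get right. Here is a runnable Python sketch of the flow, with dictionaries standing in for the two cloud services (all names and data here are invented; the real calls are the Android Snapshot and AppState APIs described above):

    # Hypothetical stand-ins for the two services; real code calls the
    # Play Games Snapshot and AppState Android APIs instead.
    snapshots = {}                                 # new Saved Games service
    appstate = {"player1": b"level=7;coins=140"}   # old service, soon read-only

    def load_game(player_id):
        # 1. Prefer the new Saved Games service if a save exists there.
        if player_id in snapshots:
            return snapshots[player_id]
        # 2. Otherwise fall back to the old AppState data.
        return appstate.get(player_id)

    def save_game(player_id, data):
        # 3. Always write back through the new service only.
        snapshots[player_id] = data

    progress = load_game("player1")           # first run: served from AppState
    save_game("player1", progress)            # migrated on the very next save
    assert load_game("player1") == progress   # now served by Saved Games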

In a few months, we will be modifying the old AppState service to be read-only. You'll still be able to read your user's old cloud save games and transfer them to the new Saved Games service, but you'll no longer be able to save games using the old service. We are evaluating early Q2 of 2015 to make this change, which should give you enough time to push your "start using Saved Games" update to the world.

If you want to find out more about Saved Games and how they work, feel free to review our documentation, our sample applications, or our Game On! videos. And we look forward to many more hours of gaming, no matter how many times we switch devices.

Join the discussion on

+Android Developers
Categories: Programming

Quote of the Day

Herding Cats - Glen Alleman - Wed, 11/19/2014 - 19:39

All ideas require credible evidence to be tested, suspect ideas require that even more so - Deep Inelastic Scattering, thesis adviser, University of California, 1978

When the response to questions about the applicability of an idea is pushback - accusations that those asking the questions, in an attempt to determine the applicability and truth of the statement, are somehow afraid of that truth - it suggests there is little evidence available to test those conjectures.

When there are proposals that ignore the principles of business, microeconomics, and control systems theory, and are based on well-known bad management practices with well-known and easy-to-apply corrective actions - there is no there, there.


So without a testable process, in a testable domain, with evidence-based assessment of applicability, outcomes, and benefits, any suggestion is opinion at best and blather at worst.

Categories: Project Management

Coding Android TV games is easy as pie

Android Developers Blog - Wed, 11/19/2014 - 19:28
Posted by Alex Ames, Fun Propulsion Labs at Google*

We’re pleased to announce Pie Noon, a simple game created to demonstrate multi-player support on the Nexus Player, an Android TV device. Pie Noon is an open source, cross-platform game written in C++ which supports:

  • Up to 4 players using Bluetooth controllers.
  • Touch controls.
  • Google Play Games Services sign-in and leaderboards.
  • Other Android devices (you can play on your phone or tablet in single-player mode, or against human adversaries using Bluetooth controllers).

Pie Noon serves as a demonstration of how to use the SDL library in Android games as well as Google technologies like Flatbuffers, Mathfu, fplutil, and WebP.

  • Flatbuffers provides efficient serialization of the data loaded at run time for quick loading times. (Examples: schema files and loading compiled Flatbuffers)
  • Mathfu drives the rendering code, particle effects, scene layout, and more, allowing for efficient mathematical operations optimized with SIMD. (Example: particle system)
  • fplutil streamlines the build process for Android, making iteration faster and easier. Our Android build script makes use of it to easily compile and run on Android devices.
  • WebP compresses image assets more efficiently than jpg or png file formats, allowing for smaller APK sizes.

You can download the game in the Play Store and the latest open source release from our GitHub page. We invite you to learn from the code to see how you can implement these libraries and utilities in your own Android games. Take advantage of our discussion list if you have any questions, and don’t forget to throw a few pies while you’re at it!

* Fun Propulsion Labs is a team within Google that's dedicated to advancing gaming on Android and other platforms.

Categories: Programming

Three Alternatives for Making Smaller Stories

When I was in Israel a couple of weeks ago teaching workshops, one of the big problems people had was large stories. Why was this a problem? If your stories are large, you can’t show progress, and more importantly, you can’t change.

For me, the point of agile is the transparency—hey, look at what we’ve done!—and the ability to change. You can change the items in the backlog for the next iteration if you are working in iterations. You can change the project portfolio. You can change the features. But, you can’t change anything if you continue to drag on and on and on for a given feature. You’re not transparent if you keep developing a feature. You become a black hole.

Managers start to ask, “What are you guys doing? When will you be done? How much will this feature cost?” Do you see where you need to estimate more if the feature is large? Of course, the larger the feature, the more you need to estimate and the more difficult it is to estimate well.

Pawel Brodzinski said this quite well last year at the Agile conference, with his favorite estimation scale. Anything other than a size 1 was either too big or the team had no clue.

The reason Pawel and I and many other people like very small stories—size of 1—is that you deliver something every day or more often. You have transparency. You don’t invest a ton of work without getting feedback on the work.

The people I met a couple of weeks ago felt (and were) stuck. One guy was doing intricate SQL queries. He thought that there was no value until the entire query was done. Nope, that’s where he is incorrect. There is value in interim results. Why? How else would you debug the query? How else would you discover if you had the database set up correctly for product performance?

I suggested that every single atomic transaction was a valuable piece, and that the way to build small stories was to separate this hairy SQL statement at the atomic transactions. I bet there are other ways, but that was a good start. He got that aha look, so I am sure he will think of other ways.
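
Here is that idea in sketch form, using Python and sqlite3 (a toy schema I invented, not his query): each stage of the hairy query lands in its own table, so every stage is an atomic, independently testable piece of value.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE orders (id INTEGER, customer TEXT, total REAL);
        INSERT INTO orders VALUES (1, 'acme', 120.0), (2, 'acme', 80.0),
                                  (3, 'globex', 40.0);
    """)

    # Stage 1: one atomic, testable piece -- per-customer totals.
    conn.execute("""
        CREATE TEMP TABLE customer_totals AS
        SELECT customer, SUM(total) AS revenue
        FROM orders GROUP BY customer
    """)
    # The interim result can be checked (and debugged) on its own.
    assert dict(conn.execute("SELECT * FROM customer_totals")) == \
        {"acme": 200.0, "globex": 40.0}

    # Stage 2: build on the verified interim table.
    top = conn.execute(
        "SELECT customer FROM customer_totals ORDER BY revenue DESC LIMIT 1"
    ).fetchone()
    assert top == ("acme",)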

Another guy was doing algorithm development. Now, I know one issue with algorithm development is you have to keep testing performance or reliability or something else when you do the development. Otherwise, you fall off the deep end. You have an algorithm tuned for one aspect of the system, but not another one. The way I’ve done this in the past is to support algorithm development with a variety of tests.

This is the testing continuum from Manage It! Your Guide to Modern, Pragmatic Project Management. See the unit and component testing parts? If you do algorithm development, you need to test each piece of the algorithm—the inner loop, the next outer loop, repeat for each loop—with some sort of unit test, then component test, then as a feature. And, you can do system level testing for the algorithm itself.

Back when I tested machine vision systems, I was the system tester for an algorithm we wanted to go “faster.” I created the golden master tests and measured the performance. I gave my tests to the developers. Then, as they changed the inner loops, they created their own unit tests. (No, we were not smart enough to do test-driven development. You can be.) I helped create the component-level tests for the next-level-up tests. We could run each new potential algorithm against the golden master and see if the new algorithm was better or not.
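
That loop is easy to sketch. Here is a minimal golden-master harness in Python (the algorithm and data are placeholders I made up; the pattern is the point): record the trusted implementation's output once, then hold every candidate to it while you time it.

    import random, time

    def reference_blur(pixels):        # the trusted, slow implementation
        return [(a + b) / 2 for a, b in zip(pixels, pixels[1:])]

    def candidate_blur(pixels):        # the "faster" rewrite under test
        return [(pixels[i] + pixels[i + 1]) / 2 for i in range(len(pixels) - 1)]

    random.seed(42)                    # fixed inputs keep the master reproducible
    inputs = [random.random() for _ in range(100_000)]
    golden = reference_blur(inputs)    # in real life: computed once, saved to disk

    start = time.perf_counter()
    result = candidate_blur(inputs)
    elapsed = time.perf_counter() - start

    assert result == golden, "candidate diverges from the golden master"
    print(f"matched the golden master in {elapsed:.4f}s")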

I realize that you don’t have a product until everything works. This is like saying in math that you don’t have an answer until you have finished the entire calculation. And, you are allowed—in fact, I encourage you—to show your interim work. How else can you know if you are making progress?

Another participant said that he was special. (Each and every one of you is special. Don’t you know that by now??) He was doing firmware development. I asked if he simulated the firmware before he downloaded it to the device. “Of course!” he said. “So, simulate in smaller batches,” I suggested. He got that far-off look. You know that look, the one that says, “Why didn’t I think of that?”

He didn’t think of it because it requires changes to their simulator. He’s not an idiot. Their simulator is built for an entire system, not small batches. The simulator assumes waterfall, not agile. They have some technical debt there.
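
The batch idea itself is simple; in sketch form (the "simulator" here is a stub I invented, standing in for their whole-system simulator) it is just a feedback loop per small batch of changes:

    def simulate(changes):
        """Stub simulator: flag any change that blows the timing budget."""
        return [c for c in changes if c["cycles"] > 100]

    changes = [{"id": i, "cycles": 90 + i} for i in range(40)]  # invented workload

    BATCH = 8
    for start in range(0, len(changes), BATCH):
        batch = changes[start:start + BATCH]
        failures = simulate(batch)       # feedback after ~8 changes, not all 40
        if failures:
            print(f"batch {start // BATCH}: rework {[c['id'] for c in failures]}")
            # fix these before writing the next batch, while they're fresh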

Here are the three ways, in case you weren’t clear:

  1. Use atomic transactions as a way to show value when you have a big honking transaction. Use tests for each atomic transaction to support your work and understand if you have the correct performance on each transaction.
  2. Break apart algorithm development, as in “show your work.” Support your algorithm development with tests, especially if you have loops.
  3. Simulate in small batches when you have hardware or firmware. Use tests to support your work.

You want to deliver value in your projects. Short stories allow you to do this. Long stories stop your momentum. The longer your project, and the more teams (if you work on a program), the more you need to keep your stories short. Try these alternatives.

Do you have other scenarios I haven’t discussed? Ask away in the comments.

Categories: Project Management

Launchpad Online for developers: Getting Started with Google APIs

Google Code Blog - Wed, 11/19/2014 - 19:00
Google APIs give you the power to build rich integrations with our most popular applications, including Google Maps, YouTube, Gmail, Google Drive and more. If you are new to our developer features, we want to give you a quick jumpstart on how to use them effectively and build amazing things.

As you saw last week, we’re excited to partner with the Startup Launch team to create “Launchpad Online”, a new video series for a global audience. It joins a cadre that already includes the established “Root Access” and “How I:” series, as well as a “Global Spotlight” series on some of the startups that have benefitted from the Startup Launch program. This new series focuses on developers who are beginners to one or more Google APIs. Each episode helps get new users off the ground with the basics, shows experienced developers more advanced techniques, or introduces new API features, all in as few lines of code as possible.

The series assumes you have some experience developing software and are comfortable using languages like JavaScript and Python. Be inspired by a short working example that you can drop into your own app and customize as desired, letting you “take off” with that great idea. Hopefully you had a chance to view our intro video last week and are ready for more!

Once you’re ready to get started, learn how to use the Google Developers Console to set up a project, and walk through the common security code required for your app to access Google APIs. With this “foundation” under your belt, you’ll be ready to build your first app: listing your files on Google Drive. You’ll learn something new and see some code that makes it happen in each future episode… we look forward to having you join us soon on Launchpad Online!
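
As a taste of where that first app lands, here is a minimal Drive file-listing sketch in Python. It uses today's google-api-python-client and google-auth libraries (which postdate the series) and assumes you have already created OAuth credentials in the Developers Console and saved an authorized user token:

    from google.oauth2.credentials import Credentials
    from googleapiclient.discovery import build

    # Assumes the OAuth dance is done and a token was saved as token.json;
    # creating these credentials is what the Developers Console episode covers.
    creds = Credentials.from_authorized_user_file(
        "token.json",
        scopes=["https://www.googleapis.com/auth/drive.metadata.readonly"])

    drive = build("drive", "v3", credentials=creds)
    response = drive.files().list(pageSize=10, fields="files(name)").execute()
    for f in response.get("files", []):
        print(f["name"])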

Posted by Wesley Chun, Developer Advocate, Google

+Wesley Chun (@wescpy) is an engineer, author of the bestselling Core Python books, and a Developer Advocate at Google, specializing in Google Drive, Google Apps Script, Gmail, Google Apps, cloud computing, developer training, and academia. He loves traveling worldwide to meet Google users everywhere, whether at a developers conference, user group meeting, or on a university campus!
Categories: Programming

We are leaving 3x-4x performance on the table just because of configuration.

Performance guru Martin Thompson gave a great talk at Strangeloop, Aeron: Open-source high-performance messaging, and one of the many interesting points he made was how much performance is being lost because we aren't configuring machines properly.

This point comes on the observation that "Loss, throughput, and buffer size are all strongly related."

Here's a gloss of Martin's reasoning. The problem keeps happening, and people don't notice it happening, because most people are not aware of how to tune network parameters in the OS.

The separation of programmers and system admins has become an anti-pattern. Developers don’t talk to the people who have root access on machines who don’t talk to the people that have network access. Which means machines are never configured right, which leads to a lot of loss. We are leaving 3x-4x performance on the table just because of configuration.

We need to work out how to bridge that gap, know what the parameters are, and how to fix them.

So know your OS network parameters and how to tune them.
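
A concrete first step: check what buffer sizes the OS actually grants your sockets. A Python sketch (Linux-flavored; the right values depend on your bandwidth-delay product):

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    # What the OS hands out by default.
    default_rcv = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)

    # Ask for a 4 MB receive buffer; Linux silently caps the request at
    # net.core.rmem_max -- exactly the kind of parameter Martin means.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
    granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)

    print(f"default: {default_rcv} bytes, granted after request: {granted} bytes")
    # If granted << requested, raise the cap (as root), e.g.:
    #   sysctl -w net.core.rmem_max=8388608

If the granted size comes back far below what your throughput and latency require, that is the loss and throughput being left on the table.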

Categories: Architecture

What is a Monolith?

Coding the Architecture - Simon Brown - Wed, 11/19/2014 - 10:00

There is currently a strong trend for microservice based architectures and frequent discussions comparing them to monoliths. There is much advice about breaking up monoliths into microservices and also some amusing fights between proponents of the two paradigms - see the great Microservices vs Monolithic Melee. The term 'Monolith' is increasingly being used as a generic insult in the same way that 'Legacy' is!

However, I believe that there is a great deal of misunderstanding about exactly what a 'Monolith' is and those discussing it are often talking about completely different things.

A monolith can be considered an architectural style or a software development pattern (or anti-pattern if you view it negatively). Styles and patterns usually fit into different Viewtypes (a viewtype is a set, or category, of views that can be easily reconciled with each other [Clements et al., 2010]) and some basic viewtypes we can discuss are:

  • Module - The code units and their relation to each other at compile time.
  • Allocation - The mapping of the software onto its environment.
  • Runtime - The structure of the software elements and how they interact at runtime.

A monolith could refer to any of the basic viewtypes above.


Module Monolith

If you have a module monolith then all of the code for a system is in a single codebase that is compiled together and produces a single artifact. The code may still be well structured (classes and packages that are coherent and decoupled at a source level rather than a big-ball-of-mud) but it is not split into separate modules for compilation. Conversely a non-monolithic module design may have code split into multiple modules or libraries that can be compiled separately, stored in repositories and referenced when required. There are advantages and disadvantages to both but this tells you very little about how the code is used - it is primarily done for development management.



Allocation Monolith

For an allocation monolith, all of the code is shipped/deployed at the same time. In other words once the compiled code is 'ready for release' then a single version is shipped to all nodes. All running components have the same version of the software running at any point in time. This is independent of whether the module structure is a monolith. You may have compiled the entire codebase at once before deployment OR you may have created a set of deployment artifacts from multiple sources and versions. Either way this version for the system is deployed everywhere at once (often by stopping the entire system, rolling out the software and then restarting).

A non-monolithic allocation would involve deploying different versions to individual nodes at different times. This is again independent of the module structure as different versions of a module monolith could be deployed individually.



Runtime Monolith

A runtime monolith will have a single application or process performing the work for the system (although the system may have multiple, external dependencies). Many systems have traditionally been written like this (especially line-of-business systems such as Payroll, Accounts Payable, CMS, etc.).

Whether the runtime is a monolith is independent of whether the system code is a module monolith or not. A runtime monolith often implies an allocation monolith if there is only one main node/component to be deployed (although this is not the case if a new version of software is rolled out across regions, with separate users, over a period of time).


Note that my examples above are slightly forced for the viewtypes and it won't be as hard-and-fast in the real world.

Conclusion

Be very careful when arguing about 'Microservices vs Monoliths'. A direct comparison is only possible when discussing the Runtime viewtype and properties. You should also not assume that moving away from a Module or Allocation monolith will magically enable a Microservice architecture (although it will probably help). If you are moving to a Microservice architecture then I'd advise you to consider all these viewtypes and align your boundaries across them, i.e. don't just code, build and distribute a monolith that exposes subsets of itself on different nodes.

Categories: Architecture