
Feed aggregator

Agile User Acceptance Testing Spans The Entire Product Development Life Cycle

Agile re-defines acceptance testing as a “formal description of the behavior of a software product[1].”

User acceptance testing (UAT) is a process that confirms that the output of a project meets the business needs and requirements. Classically, UAT would happen at the end of a project or release. Agile spreads UAT across the entire product development life cycle, re-defining acceptance testing as a “formal description of the behavior of a software product[1].” By redefining acceptance testing as a description of what the software does (or is supposed to do) that can be proved (the testing part), Agile makes acceptance testing more important than ever by making it integral across the entire Agile life cycle.

Agile begins the acceptance testing process as requirements are being discovered. Acceptance tests are developed as part of the requirements life cycle in an Agile project because acceptance test cases are a form of requirements in their own right. The acceptance tests are part of the overall requirements, adding depth and granularity to the brevity of the classic user story format (persona, goal, benefit). Just like user stories, there is often a hierarchy of granularity from an epic to a user story. The acceptance tests that describe a feature or epic need to be decomposed in lockstep with the decomposition of features and epics into user stories. Institutionalizing the process of generating acceptance tests at the feature and epic level and then breaking the stories and acceptance test cases down as part of grooming is a mechanism to synchronize scaled projects (we will dive into greater detail on this topic in a later entry).

As stories are accepted into sprints and development begins, acceptance test cases become a form of executable specification. Because the acceptance test describes what the user wants the system to do, the functionality of the code can be compared to the expected outcome of the acceptance test case to guide the developer.
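For illustration only (this example is mine, not the article's), such an executable specification might look like a Mocha/Chai test, where the Account module and its deposit behavior are assumptions for the sketch:

// Hypothetical executable specification for the story:
// "As a customer, I want my balance updated when I make a deposit."
// Mocha/Chai style; '../src/account' is an assumed application module.
const { expect } = require('chai');
const Account = require('../src/account');

describe('Acceptance: deposit updates balance', () => {
  it('adds the deposited amount to the current balance', () => {
    const account = new Account({ balance: 100 });
    account.deposit(25);
    expect(account.balance).to.equal(125);
  });
});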

When development of user stories is done, the acceptance test cases provide a final feedback step to prove completion. The output of acceptance testing is a reflection of functional testing that can be replicated as part of the demo process. Typically, acceptance test cases are written by users (often Product Owners or subject matter experts) and reflect what the system is supposed to do for the business. Ultimately, acceptance testing provides proof to the user community that the team (or teams) are delivering what is expected.

As one sprint follows another, the acceptance test cases from earlier sprints are often recast as functional regression test cases in later sprints.

Agile user acceptance tests are a direct reflection of the functional specifications: they guide coding, provide a basis for demos and, finally, ensure that later changes don’t break functions that were developed and accepted in earlier sprints. UAT in an Agile project is more rigorous and timely than the classic end-of-project UAT found in waterfall projects.

[1] http://guide.agilealliance.org/guide/acceptance.html, September 2015


Categories: Process Management

Sponsored Post: Microsoft , Librato, Surge, Redis Labs, Jut.io, VoltDB, Datadog, MongoDB, SignalFx, InMemory.Net, Couchbase, VividCortex, MemSQL, Scalyr, AiScaler, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • Microsoft’s Visual Studio Online team is building the next generation of software development tools in the cloud out in Durham, North Carolina. Come help us build innovative workflows around Git and continuous deployment, help solve the Git scale problem or help us build a best-in-class web experience. Learn more and apply.

  • VoltDB's in-memory SQL database combines streaming analytics with transaction processing in a single, horizontal scale-out platform. Customers use VoltDB to build applications that process streaming data the instant it arrives to make immediate, per-event, context-aware decisions. If you want to join our ground-breaking engineering team and make a real impact, apply here.  

  • At Scalyr, we're analyzing multi-gigabyte server logs in a fraction of a second. That requires serious innovation in every part of the technology stack, from frontend to backend. Help us push the envelope on low-latency browser applications, high-speed data processing, and reliable distributed systems. Help extract meaningful data from live servers and present it to users in meaningful ways. At Scalyr, you’ll learn new things, and invent a few of your own. Learn more and apply.

  • UI Engineer: AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer, Infrastructure & Big Data: AppDynamics, a leader in next-generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.
Fun and Informative Events
  • Surge 2015. Want to mingle with some of the leading practitioners in the scalability, performance, and web operations space? Looking for a conference that isn't just about pitching you highly polished success stories, but that actually puts an emphasis on learning from real world experiences, including failures? Surge is the conference for you.

  • Your event could be here. How cool is that?
Cool Products and Services
  • Librato, a SolarWinds Cloud company, is a hosted monitoring platform for real-time operations and performance analytics. Easily add metrics from any source using turnkey solutions such as the AWS Cloudwatch integration, or by leveraging any of over 100 open source collection agents and language bindings. Librato is loved equally by DevOps and data engineers. Start using Librato today. Full-featured and free for 30 days.

  • MongoDB Management Made Easy. Gain confidence in your backup strategy. MongoDB Cloud Manager makes protecting your mission critical data easy, without the need for custom backup scripts and storage. Start your 30 day free trial today.

  • In a recent benchmark for NoSQL databases on the AWS cloud, Redis Labs Enterprise Cluster's performance obliterated Couchbase, Cassandra and Aerospike in a real-life, write-intensive use case. A full backstage pass and all the juicy details are available in this downloadable report.

  • Real-time correlation across your logs, metrics and events.  Jut.io just released its operations data hub into beta and we are already streaming in billions of log, metric and event data points each day. Using our streaming analytics platform, you can get real-time monitoring of your application performance, deep troubleshooting, and even product analytics. We allow you to easily aggregate logs and metrics by micro-service, calculate percentiles and moving window averages, forecast anomalies, and create interactive views for your whole organization. Try it for free, at any scale.

  • In a recent benchmark conducted on Google Compute Engine, Couchbase Server 3.0 outperformed Cassandra by 6x in resource efficiency and price/performance. The benchmark sustained over 1 million writes per second using only one-sixth as many nodes and one-third as many cores as Cassandra, resulting in 83% lower cost than Cassandra. Download Now.

  • Datadog is a monitoring service for scaling cloud infrastructures that bridges together data from servers, databases, apps and other tools. Datadog provides Dev and Ops teams with insights from their cloud environments that keep applications running smoothly. Datadog is available for a 14 day free trial at datadoghq.com.

  • Turn chaotic logs and metrics into actionable data. Scalyr replaces all your tools for monitoring and analyzing logs and system metrics. Imagine being able to pinpoint and resolve operations issues without juggling multiple tools and tabs. Get visibility into your production systems: log aggregation, server metrics, monitoring, intelligent alerting, dashboards, and more. Trusted by companies like Codecademy and InsideSales. Learn more and get started with an easy 2-minute setup. Or see how Scalyr is different if you're looking for a Splunk alternative or Sumo Logic alternative.

  • SignalFx: a just-launched advanced monitoring platform for modern applications that's already processing tens of billions of data points per day. SignalFx lets you create custom analytics pipelines on metrics data collected from thousands or more sources to create meaningful aggregations--such as percentiles, moving averages and growth rates--within seconds of receiving data. Start a free 30-day trial!

  • InMemory.Net provides a native .NET in-memory database for analysing large amounts of data. It runs natively on .NET, and provides native .NET, COM & ODBC APIs for integration. It also has an easy to use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • VividCortex goes beyond monitoring and measures the system's work on your MySQL and PostgreSQL servers, providing unparalleled insight and query-level analysis. This unique approach ultimately enables your team to work more effectively, ship more often, and delight more customers.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Also available on Amazon Web Services. Free instant trial, 2 hours of FREE deployment support, no sign-up required. http://aiscaler.com

  • ManageEngine Applications Manager: Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com: Monitor End User Experience from a global monitoring network.


Categories: Architecture

Budget When You Can’t Estimate

Mike Cohn's Blog - 12 hours 6 min ago

I've written before that we should only estimate if having the estimate will change someone's actions. Normally, this means that the team should estimate work only if having the estimate will either:

  • enable the product owner to better prioritize the product backlog, or
  • let the product owner answer questions about when some amount of functionality can be available.

But, even if we only ask for estimates at these times, there will be some work that the team finds extremely hard or nearly impossible to estimate. Let's look at the best approach when an agile team is asked to estimate work they don't think they can reasonably estimate. But let's start with some of the reasons why an agile team may not be able to estimate well.

Why a Team Might Not Be Able to Estimate Well

This might occur for a variety of reasons. It happens frequently when a team first begins working in a new domain or with new technology.

Take a set of highly experienced team members who have worked together for years building, let's say, healthcare applications for the Web. And then move them to building banking applications for mobile devices. They can come up with some wild guesses about that new mobile banking app. But that's about all that estimate would be--a wild guess.

Similarly, team members can have a hard time estimating when they aren't really much of a team yet. If seven people are thrown together today for the first time and immediately asked to estimate a product backlog, their estimates will suck. Individuals won't know who has which skills (as opposed to who says they have those skills). They won't know how well they collaborate, and so on. After such a team works together for perhaps a few months, or maybe even just a few weeks, the quality of their estimates should improve dramatically.

There can also be fundamental reasons why an estimate is hard to produce. Some work is hard to estimate. For example, how long until cancer is cured? Or, how long will it take to design (or architect) this new system? Teams hate being asked to estimate this kind of work. In some cases, it's done when it's done, as in the cancer example. In other cases, it's done when time runs out, as in the design or architecture example.

Budgeting Instead of Estimating

In cases like these, the best thing is to approach the desire for an estimate from a different direction. Instead of asking a team, "How long will this take?" the business thinks, "How much time is this worth?" In effect, what this does is create a budget for the feature, rather than an estimate. Creating a budget for a feature is essentially applying the agile practice of timeboxing. The business says, "This feature is worth four iterations to us." In a healthy, mature agile organization, the team should be given a chance to say if they think they can do a reasonable job in that amount of time. If the team does not think it can build something the business will value in the budgeted amount of time, a discussion should ensue. What if the budget were expanded by another sprint or two? Can portions of the feature be considered optional so that the mandatory features are delivered within the budget? Establishing a budget frames this type of discussion.

What Do You Think?

What do you think of this approach? Are there times when you'd find it more useful to put a budget or timebox on a feature rather than estimate it? Have you done this in the past? I'd love to know your thoughts in the comments below.

30 Hot Books for Your Backlog (September)

NOOP.NL - Jurgen Appelo - 12 hours 39 min ago

30 Hot Books for Your Backlog… These are my 30 personal reading tips for September!

The post 30 Hot Books for Your Backlog (September) appeared first on NOOP.NL.

Categories: Project Management

Docker and Containers: Coffee With A Googler meets Brian Dorsey

Google Code Blog - Mon, 08/31/2015 - 22:15

Posted by Laurence Moroney, Developer Advocate

If you’ve worked with Web or cloud tech over the last 18 months, you’ll have heard about Containers and about how they let you spend more time on building software, instead of managing infrastructure. In this episode of Coffee with a Googler, we chat with Brian Dorsey about the benefits of using Containers in Google Cloud Platform for simplifying infrastructure management.

Important discussion topics covered in this episode include:

  • Containers improve the developer experience. Regardless of how large the final deployment is, they are there to make it easier for you to succeed.
  • Kubernetes -- an open source project to allow you to manage containers and fleets of containers.

Brian shares an example from Julia Ferraioli who used Containers (with Docker) to configure a Minecraft server, with many plugins, and Kubernetes to manage it.

You can learn more about Google Cloud platform, including Docker and Kubernetes at the Google Cloud Platform site.

Categories: Programming

Go Ahead, Call It a Comeback

DevHawk - Harry Pierson - Mon, 08/31/2015 - 21:55

It's been a looooong time, but I finally got around to getting DevHawk back online. It's hard to believe that it's been over a year since my last post. Lots has happened in that time!

First off, I've changed jobs (again). Last year, I made the switch from program manager to dev. Unfortunately, the project I was working on was cancelled. After several months in limbo, I was reorganized into the .NET Core framework team back over in DevDiv. I've got lots of friends in DevDiv and love the open source work they are doing. But I really missed being in Windows. Earlier this year, I joined the team that builds the platform plumbing for SmartGlass. Not much to talk about publicly right now, but that will change sometime soon.

In addition to my day job in SmartGlass, I'm also pitching in to help the Microsoft Services Disaster Response team. I knew Microsoft has a long history of corporate giving. However, I was unaware of the work we do helping communities affected by natural disasters until recently. My good friend Lewis Curtis took over as Director of Microsoft Services Disaster Response last year. I'm currently helping out on some of the missions for Nepal in response to the devastating earthquake that hit there earlier this year.

Finally, I decided that I was tired of running Other Peoples' Code™ on my website. So I built out a new blog engine called Hawk. It's written in C# (plus about 30 lines of JavaScript), uses ASP.NET 5 and runs on Azure. It's specifically designed for my needs - for example, it automatically redirects old DasBlog style links like http://devhawk.net/2005/10/05/code+is+model.aspx. But I'm happy to let other people use it and would welcome contributions. When I get a chance, I'll push the code up to GitHub.

Categories: Architecture, Programming

Making Amazon ECS Container Service as easy to use as Docker run

Xebia Blog - Mon, 08/31/2015 - 20:52

One of the reasons Docker caught fire was that it was so easy to use. You could build and start a Docker container in a matter of seconds. With Amazon ECS this is not so. You have to learn a whole new lingo (Clusters, Task definitions, Services and Tasks), spin up an ECS cluster, write a nasty looking JSON file or wrestle with a not-so-user-friendly UI before you have your container running in ECS.

In this blog we will show you that Amazon ECS can be just as fast, by presenting a small utility named ecs-docker-run, which allows you to start a Docker container almost as fast as with stand-alone Docker by interpreting the docker run command line options. Together with a ready-to-run CloudFormation template, you can be up and running with Amazon ECS within minutes!

ECS Lingo

Amazon ECS uses different lingo than the Docker community, which causes confusion. Here is a short translation:

- Cluster - one or more Docker Hosts.
- Task Definition - A JSON representation of a docker run command line.
- Task - A running docker instance. When the instance stops, the task is finished.
- Service - A running docker instance; when it stops, it is restarted.

Basically, that is all there is to it (cutting a few corners and skimping on a number of details).

Once you know this, we are ready to use ecs-docker-run.

ECS Docker Run

ecs-docker-run is a simple command line utility to run docker images on Amazon ECS. To use this utility you can simply type something familiar like:

ecs-docker-run \
        --name paas-monitor \
        --env SERVICE_NAME=paas-monitor \
        --env SERVICE_TAGS=http \
        --env "MESSAGE=Hello from ECS task" \
        --env RELEASE=v10 \
        -P  \
        mvanholsteijn/paas-monitor:latest

substituting the 'docker run' with 'ecs-docker-run'.

Under the hood, it will generate a task definition and start a container as a task on the ECS cluster. All of the following Docker run command line options are functionally supported.

- -P - publishes all ports by pulling and inspecting the image.
- --name - the family name of the task. If unspecified, the name will be derived from the image name.
- -p - adds a port publication to the task definition.
- --env - sets an environment variable.
- --memory - sets the amount of memory to allocate, defaults to 256.
- --cpu-shares - sets the cpu shares to allocate, defaults to 100.
- --entrypoint - changes the entrypoint for the container.
- --link - sets a container link.
- -v - sets the mount points for the container.
- --volumes-from - sets the volumes to mount.

All other Docker options are ignored as they refer to possibilities NOT available to ECS containers. The following options are added, specific for ECS:

- --generate-only - will only generate the task definition on standard output, without starting anything.
- --run-as-service - runs the task as a service; ECS will ensure that 'desired-count' tasks keep running.
- --desired-count - specifies the number of tasks to run (default = 1).
- --cluster - the ECS cluster on which to run the task or service (default = cluster).

Hands-on!

In order to proceed with the hands-on part, you need to have:

- jq installed
- aws CLI installed (version 1.7.44 or higher)
- aws connectivity configured
- docker connectivity configured (to a random Docker daemon).

checkout ecs-docker-run

Get the ecs-docker-run sources by typing the following command:

git clone git@github.com:mvanholsteijn/ecs-docker-run.git
cd ecs-docker-run/ecs-cloudformation

import your ssh key pair

To look around on the ECS Cluster instances, import your public key into Amazon EC2, using the following command:

aws ec2 import-key-pair \
          --key-name ecs-$USER-key \
          --public-key-material  "$(ssh-keygen -y -f ~/.ssh/id_rsa)"

create the ecs cluster autoscaling group

In order to create your first cluster of 6 docker Docker Hosts, type the following command:

aws cloudformation create-stack \
        --stack-name ecs-$USER-cluster \
        --template-body "$(<ecs.json)"  \
        --capabilities CAPABILITY_IAM \
        --parameters \
                ParameterKey=KeyName,ParameterValue=ecs-$USER-key \
                ParameterKey=EcsClusterName,ParameterValue=ecs-$USER-cluster

This cluster is based upon the firstRun cloudformation definition, which is used when you follow the Amazon ECS wizard.

And wait for completion...

Wait for completion of the cluster creation, by typing the following command:

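# Poll the CloudFormation stack status every 10 seconds until it is no longer IN_PROGRESS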
function waitOnCompletion() {
        STATUS=IN_PROGRESS
        while expr "$STATUS" : '^.*PROGRESS' > /dev/null ; do
                sleep 10
                STATUS=$(aws cloudformation describe-stacks \
                               --stack-name ecs-$USER-cluster | jq -r '.Stacks[0].StackStatus')
                echo $STATUS
        done
}

waitOnCompletion

Create the cluster

Unfortunately, CloudFormation does not yet allow you to specify the ECS cluster name, so you need to manually create the ECS cluster, by typing the following command:

aws ecs create-cluster --cluster-name ecs-$USER-cluster

You can now manage your hosts and tasks from the Amazon AWS EC2 Container Services console.

Run the paas-monitor

Finally, you are ready to run any docker image on ECS! Type the following command to start the paas-monitor.

../bin/ecs-docker-run --run-as-service \
                        --number-of-instances 3 \
                        --cluster ecs-$USER-cluster \
                        --env RELEASE=v1 \
                        --env MESSAGE="Hello from ECS" \
                        -p :80:1337 \
                        mvanholsteijn/paas-monitor

Get the DNS name of the Elastic Load Balancer

To see the application in action, you need to obtain the DNS name of the Elastic Load Balancer. Type the following commands:

# Get the Name of the ELB created by CloudFormation
ELBNAME=$(aws cloudformation describe-stacks --stack-name ecs-$USER-cluster | \
                jq -r '.Stacks[0].Outputs[] | select(.OutputKey =="EcsElbName") | .OutputValue')

# Get the DNS from of that ELB
DNSNAME=$(aws elb describe-load-balancers --load-balancer-names $ELBNAME | \
                jq -r .LoadBalancerDescriptions[].DNSName)

Open the application

Finally, we can obtain access to the application.

open http://$DNSNAME

And it should look something like this..

host | release | message | # of calls | avg response time | last response time
b6ee7869a5e3:1337 | v1 | Hello from ECS from release v1; server call count is 82 | 68 | 45 | 36
4e09f76977fe:1337 | v1 | Hello from ECS from release v1; server call count is 68 | 68 | 41 | 38
65d8edd41270:1337 | v1 | Hello from ECS from release v1; server call count is 82 | 68 | 40 | 37

Perform a rolling upgrade

You can now perform a rolling upgrade of your application, by typing the following command while keeping your web browser open at http://$DNSNAME:

../bin/ecs-docker-run --run-as-service \
                        --number-of-instances 3 \
                        --cluster ecs-$USER-cluster \
                        --env RELEASE=v2 \
                        --env MESSAGE="Hello from Amazon EC2 Container Services" \
                        -p :80:1337 \
                        mvanholsteijn/paas-monitor

The result should look something like this:

host | release | message | # of calls | avg response time | last response time
b6ee7869a5e3:1337 | v1 | Hello from ECS from release v1; server call count is 124 | 110 | 43 | 37
4e09f76977fe:1337 | v1 | Hello from ECS from release v1; server call count is 110 | 110 | 41 | 35
65d8edd41270:1337 | v1 | Hello from ECS from release v1; server call count is 124 | 110 | 40 | 37
ffb915ddd9eb:1337 | v2 | Hello from Amazon EC2 Container Services from release v2; server call count is 43 | 151 | 9942 | 38
8324bd94ce1b:1337 | v2 | Hello from Amazon EC2 Container Services from release v2; server call count is 41 | 41 | 41 | 38
7b8b08fc42d7:1337 | v2 | Hello from Amazon EC2 Container Services from release v2; server call count is 41 | 41 | 38 | 39

Note how the rolling upgrade is a bit crude. The old instances stop receiving requests almost immediately, while all requests seem to be loaded onto the first new instance.

You do not like the ecs-docker-run script?

If you do not like the ecs-docker-run script, do not despair. Below are the equivalent Amazon ECS commands to do it without the hocus-pocus script...

Create a task definition

This is the most difficult part: manually create a task definition file called 'manual-paas-monitor.json' with the following content:

{
  "family": "manual-paas-monitor",
  "containerDefinitions": [
    {
      "volumesFrom": [],
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 1337
        }
      ],
      "command": [],
      "environment": [
        {
          "name": "RELEASE",
          "value": "v3"
        },
        {
          "name": "MESSAGE",
          "value": "Native ECS Command Line Deployment"
        }
      ],
      "links": [],
      "mountPoints": [],
      "essential": true,
      "memory": 256,
      "name": "paas-monitor",
      "cpu": 100,
      "image": "mvanholsteijn/paas-monitor"
    }
  ],
  "volumes": []
}
Register the task definition

Before you can start a task it has to be registered at ECS, by typing the following command:

aws ecs register-task-definition --cli-input-json "$(<manual-paas-monitor.json)"

Start a service

Now start a service based on this definition, by typing the following command:

aws ecs create-service \
     --cluster ecs-$USER-cluster \
     --service-name manual-paas-monitor \
     --task-definition manual-paas-monitor:1 \
     --desired-count 1

You should see a new row appear in your browser:

host | release | message | # of calls | avg response time | last response time
....
5ec1ac73100f:1337 | v3 | Native ECS Command Line Deployment from release v3; server call count is 37 | 37 | 37 | 36

Conclusion

Amazon EC2 Container Services has a higher learning curve than plain Docker. You need to get past the lingo, the creation of an ECS cluster on Amazon EC2 and, most importantly, the creation of the cumbersome task definition file. After that, it is almost as easy to use as docker run.

In return you get all the goodies from Amazon like Autoscaling groups, Elastic Load Balancers and multi-availability zone deployments ready to use in your Docker applications. So, check ECS out!


Hungry for some Big Android BBQ?

Android Developers Blog - Mon, 08/31/2015 - 18:11

Posted by Colt McAnlis, Head Performance Wrangler

The Big Android BBQ (BABBQ) is almost here and Google Developers will be there serving up a healthy portion of best practices for Android development and performance! BABBQ will be held at the Hurst Convention Center in Dallas/Ft. Worth, Texas on October 22-23, 2015.

We also have some great news! If you sign up for the event through August 25th, you will get 25% off when you use the promotional code "ANDROIDDEV25". You can also click here to use the discount.

Now, sit back, and enjoy this video of some Android cowfolk preparing for this year’s BBQ!

The Big Android BBQ is an Android combo meal with a healthy serving of everything ranging from the basics, to advanced technical dives, and best practices for developers, smothered in a sweet sauce of a close-knit community.

This year, we are packing in an unhealthy amount of Android Performance Patterns, followed up with the latest and greatest techniques and APIs from the Android 6.0 Marshmallow release. It’s all rounded out with code labs to let you get hands-on learning. To super-size your meal, Android Developer instructors from Udacity will be on-site to guide users through the Android Nanodegree. (Kinda like a personal-waiter at an all-you-can-learn buffet).

Also, come watch Colt McAnlis defend his BABBQ “Speechless” Crown against Silicon Valley reigning champ Chet Haase. It'll be a fist fight of humor in the heart of Texas!

You can get your tickets here, and we look forward to seeing you in October!

Join the discussion on +Android Developers
Categories: Programming

Games developer, Dots, share their Do’s and Don’ts for improving your visibility on Google Play

Android Developers Blog - Mon, 08/31/2015 - 18:06

Posted by Lily Sheringham, Developer Marketing at Google Play

Editor’s note: A few weeks ago we shared some tips from game developer, Seriously, on how they’ve been using notifications successfully to drive ongoing engagement. This week, we’re sharing tips from Christian Calderon at US game developer, Dots, on how to successfully optimize your Play Store Listing. -Ed.

A well thought-out Google Play store listing can significantly improve the discoverability of your app or game and drive installations. With the recent launch of Store Listing Experiments on the Google Play Developer Console, you can now conduct A/B tests on the text and graphics of your store listing page and use the data to make more informed decisions.

Dots is a US-founded game developer which released the popular game, Dots, and its addictive sequel, TwoDots. Dots used its store listings to showcase its brands and improve conversions by letting players know what to expect.

Christian Calderon, Head of Marketing for Dots, shared his top tips with us on store listings and visibility on Google Play.


Do’s and Don’ts for optimizing store listings on Google Play

Do be creative and unique with the icon. Try to visually convince the user that your product is interesting and in alignment with what they are looking for.

Don’t spam keywords in your app title. Keep the title short, original and thoughtful and keep your brand in mind when representing your product offering.

Do remember to quickly respond to reviews and implement a scalable strategy to incorporate feedback into your product offering. App ratings are important social proof that your product is well liked.

Don’t overload the ‘short description’. Keep it concise. It should be used as a call-to-action to address your product’s core value proposition and invite the user to install the application. Remember to consider SEO best practices.

Do invest in a strong overall paid and organic acquisition strategy. More downloads will make your product seem more credible to users, increasing the likeliness that a user will install your app.

Don’t overuse text in your screenshots. They should create a visual narrative for what’s in your game and help users visualize your product offering, using localization where possible.

Do link your Google Play store listing to your website, social media accounts, press releases and any of your consumer-facing channels that may drive organic visibility to your target market. This can impact your search positioning.

Don’t have a negative, too short or confusing message in your “What’s New” copy. Let users know what updates, product changes or bug fixes have been implemented in new versions. Keep your copy buoyant, informative, concise and clear.

Do use Video Visualization to narrate the core value proposition. For TwoDots, our highest converting videos consist of gameplay, showcasing features and events within the game that let the player know exactly what to expect.

Don’t flood the user with information in the page description. Keep the body of the page description organized and concise and test different structural patterns that works best for you and your product!


Use Google Play Store Listing Experiments to increase your installs

As part of the 100 Days of Google Dev video series, Kobi Glick from the Google Play team explains how to test different graphics and text on your app or game’s Play Store listing to increase conversions using the new Store Listing Experiments feature in the Developer Console.

Find out more about using Store Listing Experiments to turn more of your visits into installs.

Categories: Programming

The Art of Visualising Software Architecture

Coding the Architecture - Simon Brown - Mon, 08/31/2015 - 17:23

As you may have seen on Twitter, I've been mulling over an idea for a new book, which I'm pleased to say is going to happen. It's currently titled "The Art of Visualising Software Architecture" and, as the title suggests, it will focus on the visual communication of software architecture through diagrams.

The Art of Visualising Software Architecture

The core of the book is my C4 software architecture model and although this is covered in my existing Software Architecture for Developers book, I want to create a single resource related to this topic because I still see effective communication of software architecture as a huge gap across the software development industry. You'll notice that the title of this book includes the word "art". I've seen a number of debates over the years about whether software development is a craft or an engineering discipline. Although I think it *should* be an engineering discipline, I believe we're a number of years away from this being a reality. So while this book won't present a formalised, standardised method to communicate software architecture, it will provide a collection of ideas and techniques that thousands of people across the world have found useful.

I also want to include a number of other topics and answers to frequently asked questions that I get during my software architecture sketching workshops, including some of the blog posts I've written recently such as Help, my diagram doesn't fit on one page! and Diff'ing software architecture diagrams again, for example. I'm also going to include more discussion about notation, the various uses for diagrams, the value of creating a model and tooling. Structurizr will be in there too.

Thanks very much for all of the support so far on this; the tweets/e-mails I've had are telling me that this is the right decision. Since it's not going to be a long book and initial drafts may include some text copied verbatim from my "Software Architecture for Developers" book, I'm going to make this available via Leanpub using their variable pricing model ... with a starting price of free, certainly for a while anyway. It's a work in progress, but please feel free to grab a copy from Leanpub if you're interested. Thanks very much!

Categories: Architecture

Building Globally Distributed, Mission Critical Applications: Lessons From the Trenches Part 1

This is Part 1 of a guest post by Kris Beevers, founder and CEO, NSONE, a purveyor of a next-gen intelligent DNS and traffic management platform.

Every tech company thinks about it: the unavoidable – in fact, enviable – challenge of scaling its applications and systems as the business grows. How can you think about scaling from the beginning, and put your company on good footing, without optimizing prematurely? What are some of the key challenges worth thinking about now, before they bite you later on? When you’re building mission critical technology, these are fundamental questions. And when you’re building a distributed infrastructure, whether for reliability or performance or both, they’re hard questions to answer.

Putting the right architecture and processes in place will enable your systems and company to withstand the common hiccups distributed, high traffic applications face. This enables you to stay ahead of scaling constraints, manage inevitable network and system failures, stay calm and debug production issues in real-time, and grow your company and product successfully.

Who is this guy?

I’ve been building globally distributed, large scale applications for a long time.  Way back in the first dot-com boom, I bailed on college classes for a year and built backend infrastructure for a file-sharing startup which grew to millions of users – until the RIAA’s lawyers caught wind and sent us packing back to our dorm rooms. The business went bust, but I was hooked on scale.

More recently, at Voxel, an internet infrastructure provider that was acquired by Internap in 2011, I built global internet infrastructure used by many large web companies – we built globally distributed public cloud, bare metal as-a-service, content delivery networks, and much more. We learned a lot of scaling lessons, and we learned them the hard way.

Now, at NSONE, we’ve built a next-gen intelligent DNS and traffic management platform, which today services some of the largest properties on the Internet, including many companies who are themselves mission critical service providers.  This is truly globally distributed, mission critical infrastructure, and the lessons we learned at Voxel have served us well – and been reinforced time and again – as we’ve built and scaled the NSONE platform.

It’s time to share some of what we’ve learned, and with luck, maybe you can apply some of these lessons in your own applications – instead of learning them the hard way!

Architecture first
Categories: Architecture

Software Development Conferences Forecast August 2015

From the Editor of Methods & Tools - Mon, 08/31/2015 - 14:53
Here is a list of software development related conferences and events on Agile project management ( Scrum, Lean, Kanban), software testing and software quality, software architecture, programming (Java, .NET, JavaScript, Ruby, Python, PHP) and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods & […]

Unlocking ES2015 features with Webpack and Babel

Xebia Blog - Mon, 08/31/2015 - 09:53

This post is part of a series of ES2015 posts. We'll be covering new JavaScript functionality every week for the coming two months.

After being in the working draft state for a long time, the ES2015 specification (formerly known as ECMAScript 6, or ES6 for short) reached a definitive state a while ago. For a long time now, BabelJS, a Javascript transpiler formerly known as 6to5, has been available for developers who want to use ES2015 features in their projects today.

In this blog post I will show you how you can integrate Webpack, a Javascript module builder/loader, with Babel to automate the transpiling of ES2015 code to ES5. Besides that, I'll also explain how to automatically generate source maps to ease development and debugging.

Webpack Introduction

Webpack is a Javascript module builder and module loader. With Webpack you can pack a variety of different modules (AMD, CommonJS, ES2015, ...) with their dependencies into static file bundles. Webpack provides you with loaders which essentially allow you to pre-process your source files before requiring or loading them. If you are familiar with tools like Grunt or Gulp you can think of loaders as tasks to be executed before bundling sources. To make your life even easier, Webpack also comes with a development server with file watch support and browser reloading.

Installation

In order to use Webpack all you need is npm, the Node Package Manager, available by downloading either Node or io.js. Once you've got npm up and running all you need to do to setup Webpack globally is install it using npm:

npm install -g webpack

Alternatively, you can include it just in the projects of your preference using the following command:

npm install --save-dev webpack

Babel Introduction

With Babel, a Javascript transpiler, you can write your code using ES2015 (and even some ES7 features) and convert it to ES5 so that well-known browsers will be able to interpret it. On the Babel website you can find a list of supported features and how you can use these in your project today. For the React developers among us, Babel also comes with JSX support out of the box.
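As a rough illustration (mine, not from the original post; the exact output differs per Babel version), an ES2015 arrow function with a template literal is transpiled to plain ES5 along these lines:

// ES2015 input
const greet = name => `Hello ${name}`;

// Approximately what Babel emits as ES5 (actual output varies by version)
var greet = function (name) {
  return 'Hello ' + name;
};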

Alternatively, there is the Google Traceur compiler which essentially solves the same problem as Babel. There are multiple Webpack loaders available for Traceur of which traceur-loader seems to be the most popular one.

Installation

Assuming you already have npm installed, installing Babel is as easy as running:

npm install --save-dev babel-loader

This command will add babel-loader to your project's package.json. Run the following command if you prefer installing it globally:

npm install -g babel-loader

Project structure
webpack-babel-integration-example/
  src/
    DateTime.js
    Greeting.js
    main.js
  index.html
  package.json
  webpack.config.js

Webpack's configuration can be found in the root directory of the project, named webpack.config.js. The ES6 Javascript sources that I wish to transpile to ES5 will be located under the src/ folder.

Webpack configuration

The required Webpack configuration file is very straightforward and covers a few aspects:

  • my main source entry
  • the output path and bundle name
  • the development tools that I would like to use
  • a list of module loaders that I would like to apply to my source
var path = require('path');

module.exports = {
  entry: './src/main.js',
  output: {
    path: path.join(__dirname, 'build'),
    filename: 'bundle.js'
  },
  devtool: 'inline-source-map',
  module: {
    loaders: [
      {
        test: path.join(__dirname, 'src'),
        loader: 'babel-loader'
      }
    ]
  }
};

The snippet above shows you that my source entry is set to src/main.js, the output is set to create a build/bundle.js, I would like Webpack to generate inline source maps and I would like to run the babel-loader for all files located in src/.

ES6 sources

A simple ES6 class

Greeting.js contains a simple class with only the toString method implemented to return a String that will greet the user:

class Greeting {
  toString() {
    return 'Hello visitor';
  }
}

export default Greeting
Using packages in your ES2015 code

Often enough, you rely on a bunch of different packages that you include in your project using npm. In this example, I'll use the popular date/time library Moment.js to display the current date and time to the user.

Run the following command to install Moment.js as a local dependency in your project:

npm install --save-dev moment

I have created the DateTime.js class which again only implements the toString method to return the current date and time in the default date format.

import moment from 'moment';

class DateTime {
  toString() {
    return 'The current date time is: ' + moment().format();
  }
}

export default DateTime

After importing the package using the import statement you can use it anywhere within the source file.

Your main entry

In the Webpack configuration I specified a src/main.js file to be my source entry. In this file I simply import both classes that I created, target two different DOM elements, and output the toString implementations of both classes into these DOM elements.

import Greeting from './Greeting.js';
import DateTime from './DateTime.js';

var h1 = document.querySelector('h1');
h1.textContent = new Greeting();

var h2 = document.querySelector('h2');
h2.textContent = new DateTime();
HTML

After setting up my ES2015 sources that will display the greeting in an h1 tag and the current date and time in an h2 tag, it is time to set up my index.html. Being a straightforward HTML file, the only thing that is really important is that you point the script tag to the transpiled bundle file, in this example build/bundle.js.

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Webpack and Babel integration example</title>
</head>
<body>

<h1></h1>

<h2></h2>

    <script src="build/bundle.js"></script>
</body>
</html>
Running the application

In this example project, running my application is as simple as opening the index.html in your favorite browser. However, before doing this you will need to instruct Webpack to actually run the loaders and thus transpile your sources into the build/bundle.js required by the index.html.

You can run Webpack in watch mode, meaning that it will monitor your source files for changes and automatically run the module loaders defined in your configuration. Execute the following command to run in watch mode:

webpack --watch

If you are using my example project from Github (link at the bottom), you can also use the following script which I've set up in the package.json:

npm run watch

Easier debugging using source maps

Debugging transpiled ES5 is a huge pain which will make you want to go back to writing ES5 without thinking. To ease development and debugging of ES2015 I can rely on source maps generated by Webpack. While running Webpack (normal or in watch mode) with the devtool property set to inline-source-map you can view the ES2015 source files and actually place breakpoints in them using your browser's development tools.

Debugging ES6 with source maps

Running the example project with a breakpoint inside the DateTime.js toString method using the Chrome developer tools.

Conclusion

As you've just seen, setting up everything you need to get started with ES2015 is extremely easy. Webpack is a great utility that will allow you to easily set up your complete front-end build pipeline and seamlessly integrates with Babel to include code transpiling into the build pipeline. With the help of source maps even debugging becomes easy again.

Sample project

The entire sample project as introduced above can be found on Github.

SPaMCAST 357 – Mind Mapping, Who Needs Architects, TameFlow Chapter 4

Software Process and Measurement Cast - Sun, 08/30/2015 - 22:00

The Software Process and Measurement Cast features our essay on Mind Mapping. I view Mind Mapping as a great technique to stimulate creative thinking and organize ideas. Mind Mapping is one of my favorite tools. Note: this essay has a lot of visual examples; while they are explained in the reading of the essay, I have also included them in a separate document for reference. If you want to reference the figures/examples in the essay, download this PDF: Mind Mapping Figures.

We also feature Steve Tendon’s column discussing the TameFlow methodology and Chapter 4 of his great book, Hyper-Productive Knowledge Work Performance. Steve and I discussed the impact of management’s profound understanding of knowledge work and that understanding’s impact on the delivery of value.

Anchoring the cast will be Gene Hughson, returning with an entry from his Form Follows Function blog.  This week Gene discusses the ideas in his entry, “Who Needs Architects?” 

Call to Action!

For the remainder of August and September let’s try something a little different. Forget about iTunes reviews and tell a friend or a coworker about the Software Process and Measurement Cast. Word of mouth will help grow the audience for the podcast. After all, the SPaMCAST provides you with value, so why keep it to yourself?!

Re-Read Saturday News

Remember that the Re-Read Saturday of The Mythical Man-Month is in full swing.  This week we tackle the essay titled “Calling The Shot”!  Check out the new installment at Software Process and Measurement Blog.

Upcoming Events

Software Quality and Test Management 
September 13 – 18, 2015
San Diego, California
http://qualitymanagementconference.com/

I will be speaking on the impact of cognitive biases on teams.  Let me know if you are attending! If you are still deciding on attending let me know because I have a discount code.

 

Agile Development Conference East
November 8-13, 2015
Orlando, Florida
http://adceast.techwell.com/

I will be speaking on November 12th on the topic of Agile Risk. Let me know if you are going and we will have a SPaMCAST Meetup.

Next SPaMCAST

The next Software Process and Measurement Cast will feature our interview with Julia Wester. Julia is an Improvement Coach at LeanKit and writes the Everyday Kanban blog. Julia and I had a wide-ranging conversation that included both process improvement and the role of management in Agile. Julia’s opinion is that management can play a constructive role in helping Agile teams deliver the most value.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.

Categories: Process Management


Xebia KnowledgeCast Episode 6: Lodewijk Bogaards on Stackstate and TypeScript

Xebia Blog - Sun, 08/30/2015 - 12:00

The Xebia KnowledgeCast is a podcast about software architecture, software development, lean/agile, continuous delivery, and big data.

In this 6th episode, we switch to a new format! So, no fun with stickies this time. It’s one interview. And we dive in deeper than ever before.

Lodewijk Bogaards is co-founder and CTO at Stackstate. Stackstate is an enterprise application that provides real time insight in the run-time status of all business processes, applications, services and infrastructure components, their dependencies and states.

It gives immediate insight into the cause and impact of any change or failure, from the hardware level to the business process level and everything in between. To build awesome apps like that, Lodewijk and his team use a full stack of technologies that they continue to improve. One such improvement is ditching plain Javascript in favor of TypeScript and Angular, and it is that topic that we'll focus on in today's discussion.

What's your opinion on TypeScript? I'm looking forward to reading your comments!

Want to subscribe to the Xebia KnowledgeCast? Subscribe via iTunes, or Stitcher.

Your feedback is appreciated. Please leave your comments in the shownotes. Better yet, send in a voice message so we can put you ON the show!


Calling The Shot Re-Read Saturday: The Mythical Man-Month, Part 8


In the seventh essay of The Mythical Man-Month, Fred P. Brooks begins to tackle the concept of estimating. While there are many estimating techniques, Brooks’ approach is history/data-based, which we would today call parametric estimation. Parametric estimation is generally a technique that generates a prediction of the effort needed to deliver a project based on historical data of productivity, staffing and quality. Estimating is not a straightforward extrapolation of what has happened in the past to what will happen in the future, and mistaking it as such is fraught with potential issues. Brooks identified two potentially significant estimating errors that can occur when you use the past to predict the future without interpretation.

Often the only data available is the information about one part of the project’s life cycle. The first issue Brooks identified was that you cannot estimate the entire job or project by just estimating the coding and inferring the rest. There are many variables that might affect the relationship between development and testing. For example, some changes can impact more of the code than others, requiring more or less regression testing. The link between the effort required to deliver different types of work is not linear. The ability to estimate based on history requires a knowledge of project specific practices and attributes including competency, complexity and technical constraints.

Not all projects are the same. The second issue Brooks identified was that one type of project is not applicable for predicting another. Brooks used the differences between small projects and programming systems products to illustrate his point. Each type of work requires different activities, not just scaled-up versions of the same tasks. Similarly, consider the differences in the tasks and activities required for building a smartphone app compared to building a large data warehouse application. Simply put, they are radically different. Brooks drove the point home using the analogy of extrapolating the record time for the 100-yard dash (9.07 seconds, according to Wikipedia) to the time needed to run a mile. A linear extrapolation (a mile is 1,760 yards, 17.6 times the distance, so 17.6 × 9.07 ≈ 160 seconds) would suggest a mile could be run in roughly 2 minutes 40 seconds; the current record is actually 3:43.13.

A significant portion of this essay reviews a number of studies that illustrate the relationship between work done and the estimate. Brooks used these studies to highlight different factors that could impact the ability to extrapolate what has happened in the past into an estimate of the future (note: I infer from the descriptions that these studies dealt with the issues of completeness and relevance). The surveys, listed by the person who generated the data, and the conclusions we can draw from an understanding of the data, included:

  1. Charles Portman’s Data – Slippages occurred primarily because only half the time available was productive. Unrelated jobs, meetings, paperwork, downtime, vacations and other non-productive tasks used the remainder.
  2. Joel Aron’s Data – Productivity was negatively related to the number of interactions among programmers. As the number of interactions goes up, productivity goes down.
  3. John Harr’s Data – The variation between estimates and actuals tends to be affected by the size of workgroups, the length of time and the number of modules. The complexity of the program being worked on could also be a contributor.
  4. OS/360 Data – Confirmed the striking differences in productivity driven by the complexity and difficulty of the task.
  5. Corbató’s Data – Programming languages affect productivity. Higher-level languages are more productive. Said a little differently, writing a function in Ruby on Rails requires less time than writing the same function in macro assembler language.

I believe that the surveys and data discussed are less important than the recognition that there are many factors that must be addressed when trying to predict the future. In the end, estimation requires historical data regardless of method, but the data must be relevant. Relevance is shorthand for accounting for the factors that affect the type of work you are doing. In homogeneous environments, complexity and language may not be as big a determinant of productivity as the number of interactions driven by team size or the amount of non-productive time teams have to absorb. The problem with historical data is that gathering it requires effort, time and/or money. The need to expend resources to generate, collect or purchase historical data is often used as a bugaboo to resist collecting the data and as a tool to avoid using parametric or historical estimating techniques.

Recognize that the term historical data should not scare you away. Historical data can be as simple as a Scrum team collecting its velocity or productivity every sprint and using it to calculate an average for planning and estimating, as in the sketch below. Historical data can be as complex as a palette of information including project effort, size, duration, team capabilities and project context.
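
To make the simple end of that spectrum concrete, here is a minimal TypeScript sketch of velocity-based planning; the numbers are invented for illustration:

    // Average a team's past sprint velocities (story points per sprint)
    // and project how many sprints a given backlog will take.
    const pastVelocities: number[] = [21, 18, 24, 20, 22];

    const averageVelocity =
      pastVelocities.reduce((sum, v) => sum + v, 0) / pastVelocities.length;

    const backlogSize = 200; // story points remaining
    const sprintsNeeded = Math.ceil(backlogSize / averageVelocity);

    console.log(`Average velocity: ${averageVelocity} points/sprint`); // 21
    console.log(`Estimated sprints to finish: ${sprintsNeeded}`);      // 10

The same calculation scales up: parametric models simply add more factors (size, staffing, complexity) to the historical baseline.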

Previous installments of the Re-read of The Mythical Man-Month

Introductions and The Tar Pit

The Mythical Man-Month (The Essay)

The Surgical Team

Aristocracy, Democracy and System Design

The Second-System Effect

Passing the Word

Why did the Tower of Babel fall?


Categories: Process Management

The Ultimate Personal Productivity Platform is You

“Amateurs sit and wait for inspiration, the rest of us just get up and go to work.” ― Stephen King

The ultimate personal productivity platform is you.

Let’s just put that on the table right up front so you know where personal productivity ultimately comes from.  It’s you.

I can’t possibly give you anything that will help you perform better than an organized mind firing on all cylinders combined with self-awareness.

You are the one that ultimately has to envision your future.  You are the one that ultimately has to focus your attention.  You are the one that ultimately needs to choose your goals.  You are the one that ultimately has to find your motivation.  You are the one that ultimately needs to manage your energy.  You are the one that ultimately needs to manage your time.  You are the one that ultimately needs to take action.  You are the one that needs to balance work and life.

That’s a lot for you to do.

So the question isn’t whether you’re capable. Of course you are.

The real question is, how do you make the most of you?

Agile Results is a personal productivity platform to help you make the most of what you’ve got.

Agile Results is a simple system for getting better results.  It combines proven practices for productivity, time management, and motivation into a simple system you can use to achieve better, faster, easier results for work and life.

Agile Results works by integrating and synthesizing positive psychology, sport psychology, project management skills, and peak performance insights into little behavior changes you can do each day.  It’s also based on more than 10 years of extensive trial and error to help people achieve high performance.

If you don’t know how to get started, start simple:

Ask yourself the following question:  “What are three things I want to achieve today?”

And write those down.   That’s it.

You’re doing Agile Results.

Categories: Architecture, Programming

Angular 1 and Angular 2 integration: the path to seamless upgrade

Google Code Blog - Fri, 08/28/2015 - 20:05

Originally posted on the Angular blog.

Posted by Misko Hevery, Software Engineer, Angular

Have an existing Angular 1 application and are wondering about upgrading to Angular 2? Well, read on and learn about our plans to support incremental upgrades.

Summary

Good news!

    • We're enabling mixing of Angular 1 and Angular 2 in the same application.
    • You can mix Angular 1 and Angular 2 components in the same view.
    • Angular 1 and Angular 2 can inject services across frameworks.
    • Data binding works across frameworks.

Why Upgrade?

Angular 2 provides many benefits over Angular 1, including dramatically better performance, more powerful templating, lazy loading, simpler APIs, easier debugging, improved testability and much more. Here are a few of the highlights:

Better performance

We've focused on many scenarios to make your apps snappy: 3x to 5x faster in initial render and re-render scenarios.

    • Faster change detection through monomorphic JS calls
    • Template precompilation and reuse
    • View caching
    • Lower memory usage / VM pressure
    • Linear (blindingly-fast) scalability with observable or immutable data structures
    • Dependency injection supports incremental loading

More powerful templating

    • Removes need for many directives
    • Statically analyzable - future tools and IDEs can discover errors at development time instead of run time
    • Allows template writers to determine binding usage rather than hard-wiring in the directive definition

Future possibilities

We've decoupled Angular 2's rendering from the DOM. We are actively working on supporting the following other capabilities that this decoupling enables:

    • Server-side rendering. Enables blindingly fast initial render and web-crawler support.
    • Web Workers. Move your app and most of Angular to a Web Worker thread to keep the UI smooth and responsive at all times.
    • Native mobile UI. We're enthusiastic about supporting the Web Platform in mobile apps. At the same time, some teams want to deliver fully native UIs on their iOS and Android mobile apps.
    • Compile as build step. Angular apps parse and compile their HTML templates. We're working to speed up initial rendering by moving the compile step into your build process.

Angular 1 and 2 running together

Angular 2 offers dramatic advantages over Angular 1 in performance, simplicity, and flexibility. We're making it easy for you to take advantage of these benefits in your existing Angular 1 applications by letting you seamlessly mix in components and services from Angular 2 into a single app. By doing so, you'll be able to upgrade an application one service or component at a time over many small commits.

For example, you may have an app that looks something like the diagram below. To get your feet wet with Angular 2, you decide to upgrade the left nav to an Angular 2 component. Once you're more confident, you decide to take advantage of Angular 2's rendering speed for the scrolling area in your main content area.

For this to work, four things need to interoperate between Angular 1 and Angular 2:

    • Dependency injection
    • Component nesting
    • Transclusion
    • Change detection

To make all this possible, we're building a library named ng-upgrade. You'll include ng-upgrade and Angular 2 in your existing Angular 1 app, and you'll be able to mix and match at will.

You can find full details and pseudocode in the original upgrade design doc or read on for an overview of the details on how this works. In future posts, we'll walk through specific examples of upgrading Angular 1 code to Angular 2.

Dependency Injection

First, we need to solve for communication between parts of your application. In Angular, the most common pattern for calling another class or function is through dependency injection. Angular 1 has a single root injector, while Angular 2 has a hierarchical injector. Upgrading services one at a time implies that the two injectors need to be able to provide instances from each other.

The ng-upgrade library will automatically make all of the Angular 1 injectables available in Angular 2. This means that your Angular 1 application services can now be injected anywhere in Angular 2 components or services.

Exposing an Angular 2 service to an Angular 1 injector will also be supported, but will require you to provide a simple mapping configuration.

The result is that services can be easily moved, one at a time, from Angular 1 to Angular 2 over independent commits, and can communicate in a mixed environment.
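
As a hedged sketch of what this could look like in code (ng-upgrade was still at the design-doc stage when this was posted, so the names below follow the UpgradeAdapter API that later shipped as angular2/upgrade; 'orders' and Notifications are hypothetical services):

    import {Injectable} from 'angular2/core';
    import {UpgradeAdapter} from 'angular2/upgrade';

    declare var angular: any; // the Angular 1 global

    const adapter = new UpgradeAdapter();

    // Make an existing Angular 1 service ('orders') injectable
    // anywhere in Angular 2 components or services.
    adapter.upgradeNg1Provider('orders');

    // An Angular 2 service we want Angular 1 code to see.
    @Injectable()
    class Notifications {
      notify(message: string) { console.log(message); }
    }
    adapter.addProvider(Notifications); // register with the Angular 2 injector

    // The simple mapping configuration mentioned above: expose the
    // Angular 2 service under a name the Angular 1 injector understands.
    angular.module('myApp')
      .factory('notifications', adapter.downgradeNg2Provider(Notifications));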

Component Nesting and Transclusion

In both versions of Angular, we define a component as a directive which has its own template. For incremental migration, you'll need to be able to migrate these components one at a time. This means that ng-upgrade needs to enable components from each framework to nest within each other.

To solve this, ng-upgrade will allow you to wrap Angular 1 components in a facade so that they can be used in an Angular 2 component. Conversely, you can wrap Angular 2 components to be used in Angular 1. This will fully work with transclusion in Angular 1 and its analog of content-projection in Angular 2.

In this nested-component world, each template is fully owned by either Angular 1 or Angular 2 and fully follows its syntax and semantics. This is not an emulation mode that mostly looks like the other, but actual execution in each framework, depending on the type of component. This means that components which are upgraded to Angular 2 get all of the benefits of Angular 2, not just better syntax.

This also means that an Angular 1 component will always use Angular 1 Dependency Injection, even when used in an Angular 2 template, and an Angular 2 component will always use Angular 2 Dependency Injection, even when used in an Angular 1 template.
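
A sketch of the wrapping described above, again treating the UpgradeAdapter names as assumptions ('leftNav' is a hypothetical Angular 1 directive, MainView a hypothetical Angular 2 component):

    import {Component} from 'angular2/core';
    import {UpgradeAdapter} from 'angular2/upgrade';

    declare var angular: any;

    const adapter = new UpgradeAdapter();

    // Wrap the Angular 1 'leftNav' directive so an Angular 2 template
    // can nest it as <left-nav>.
    @Component({
      selector: 'main-view',
      directives: [adapter.upgradeNg1Component('leftNav')],
      template: '<left-nav></left-nav><ng-content></ng-content>'
    })
    class MainView {}

    // Conversely, expose the Angular 2 component to Angular 1 templates
    // as <main-view>, content projection and all.
    angular.module('myApp')
      .directive('mainView', adapter.downgradeNg2Component(MainView));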

Change Detection

Mixing Angular 1 and Angular 2 components implies that Angular 1 scopes and Angular 2 components are interleaved. For this reason, ng-upgrade will make sure that change detection (Scope digest in Angular 1 and Change Detectors in Angular 2) is interleaved in the same way, to maintain a predictable evaluation order of expressions.

ng-upgrade takes this into account and bridges the scope digest from Angular 1 and change detection in Angular 2 in a way that creates a single cohesive digest cycle spanning both frameworks.

Typical application upgrade process

Here is an example of what an Angular 1 project upgrade to Angular 2 may look like.

  1. Include the Angular 2 and ng-upgrade libraries with your existing application
  2. Pick a component which you would like to migrate
    1. Edit an Angular 1 directive's template to conform to Angular 2 syntax
    2. Convert the directive's controller/linking function into Angular 2 syntax/semantics
    3. Use ng-upgrade to export the directive (now a Component) as an Angular 1 component (this is needed if you wish to call the new Angular 2 component from an Angular 1 template)
  3. Pick a service which you would like to migrate
    1. Most services should require minimal to no change.
    2. Configure the service in Angular 2
    3. (optionally) re-export the service into Angular 1 using ng-upgrade if it's still used by other parts of your Angular 1 code.
  4. Repeat steps #2 and #3 in whatever order is convenient for your application
  5. Once no more services or components need to be converted, drop the top-level Angular 1 bootstrap and replace it with the Angular 2 bootstrap

Note that each individual change can be checked in separately, and the application continues working, letting you release updates as you wish.
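
For instance, the hybrid bootstrap from step 1 might look like this (a sketch under the same UpgradeAdapter assumptions; 'myApp' is a hypothetical Angular 1 module):

    import {UpgradeAdapter} from 'angular2/upgrade';

    const adapter = new UpgradeAdapter();
    // ...upgrade/downgrade registrations from the earlier steps go here...

    // Instead of ng-app or angular.bootstrap, let the adapter start both
    // frameworks together so their change detection stays interleaved.
    // Step 5 later swaps this for a pure Angular 2 bootstrap.
    adapter.bootstrap(document.body, ['myApp']);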

We are not planning to add support for making non-component directives usable on both sides. We think most non-component directives are not needed in Angular 2, as they are supported directly by the new template syntax (e.g., ng-click vs. (click)).

Q&A

I heard Angular 2 doesn't support 2-way bindings. How will I replace them?

Actually, Angular 2 supports two-way data binding and ng-model, though with slightly different syntax.

When we set out to build Angular 2, we wanted to fix issues with the Angular digest cycle. To solve this we chose to create a unidirectional data flow for change detection. At first it was not clear to us how the two-way form data binding of ng-model in Angular 1 would fit in, but we always knew that we had to make forms in Angular 2 as simple as forms in Angular 1.

After a few iterations we managed to fix what was broken with multiple digests and still retain the power and simplicity of ng-model in Angular 1.

Two-way data binding has a new syntax: [(property-name)]="expression", which makes it explicit that the expression is bound in both directions. Because for most scenarios this is just a small syntactic change, we expect easy migration.

As an example, if in Angular 1 you have: <input type="text" ng-model="model.name" />

You would convert to this in Angular 2: <input type="text" [(ng-model)]="model.name" />

What languages can I use with Angular 2?

Angular 2 APIs fully support coding in today's JavaScript (ES5), the next version of JavaScript (ES6 or ES2015), TypeScript, and Dart.

While it's a perfectly fine choice to continue with today's JavaScript, we'd like to recommend that you explore ES6 and TypeScript (which is a superset of ES6) as they provide dramatic improvements to your productivity.

ES6 provides much-improved syntax and interoperable standards for common libraries like promises and modules. TypeScript gives you dramatically better code navigation, automated refactoring in IDEs, documentation, error detection, and more.

Both ES6 and TypeScript are easy to adopt as they are supersets of today's ES5. This means that all your existing code is valid and you can add their features a little at a time.
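
As a minimal illustration of that superset relationship (the functions are hypothetical):

    // Existing ES5 code is already valid TypeScript...
    function greet(name) {
      return 'Hello, ' + name;
    }

    // ...and type annotations can be added one declaration at a time.
    function greetTyped(name: string): string {
      return 'Hello, ' + name;
    }

    console.log(greet('ES5'), greetTyped('TypeScript'));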

What should I do with $watch in our codebase?

tl;dr: $watch expressions need to be moved into declarative annotations. Those that don't fit there should take advantage of observables (reactive programming style).

In order to gain speed and predictability, in Angular 2 you specify watch expressions declaratively. The expressions either live in HTML templates, where they are auto-watched (no work for you), or have to be declaratively listed in the directive annotation.

One additional benefit from this is that Angular 2 applications can be safely minified/obfuscated for smaller payload.

For inter-application communication, Angular 2 offers observables (reactive programming style).
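
A sketch of the declarative style that replaces $scope.$watch, using the input/lifecycle API as it later stabilized (PriceTag is a hypothetical component):

    import {Component, Input, OnChanges} from 'angular2/core';

    @Component({
      selector: 'price-tag',
      template: '<span>{{formatted}}</span>' // template bindings are auto-watched
    })
    class PriceTag implements OnChanges {
      @Input() price: number; // declared input replaces $scope.$watch('price', ...)
      formatted = '';

      ngOnChanges() { // invoked whenever a bound input changes
        this.formatted = '$' + this.price.toFixed(2);
      }
    }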

What can I do today to prepare myself for the migration?

Follow the best practices and build your application using components and services in Angular 1 as described in the AngularJS Style Guide.

Wasn't the original upgrade plan to use the new Component Router?

The upgrade plan that we announced at ng-conf 2015 was based on upgrading a whole view at a time and having the Component Router handle communication between the two versions of Angular.

The feedback we received was that while yes, this was incremental, it wasn't incremental enough. We went back and redesigned, arriving at the plan described above.

Are there more details you can share?

Yes! In the Angular 1 to Angular 2 Upgrade Strategy design doc.

We're working on a series of upcoming posts on related topics including:

  1. Mapping your Angular 1 knowledge to Angular 2.
  2. A set of FAQs on details around Angular 2.
  3. Detailed migration guide with working code samples.

See you back here soon!

Categories: Programming

Stuff The Internet Says On Scalability For August 28th, 2015

Hey, it's HighScalability time:


The oldest known fossil of a flowering plant. 130 million years old. What digital will last so long?
  • 32.6: Ashley Madison password cracks per hour; 1 million: cores in the Human Brain Project's silicon brain; 54,000: tennis balls used at Wimbledon; 4 kB: size of first web page; 1.2 million: messages per second Apache Samza performance on a single node; 27%: higher conversion for sites loading one second faster

  • Quotable Quotes:
    • @adrianco: Apple first read about Mesos on http://highscalability.com  and for a year have run Siri on the worlds biggest cluster 
    • @Besvinick: Interesting recurring sentiment from recent grads: We lived most of our college lives on Snapchat—now we don't have any "tangible" memories.
    • Robin Hobb: For most moments of our lives, we have forgotten almost all of the world around us, except for what currently claims our interest.
    • @Carnage4Life: I'd like to thank all the Amazon employees who cried at their desks to make this possible
Categories: Architecture