
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Why Trust is Hard

Herding Cats - Glen Alleman - 1 hour 54 min ago

Hugh MacLeod's art for Zappos provides the foundation for trust in that environment.

If I'm the head of HR, I'm responsible for filling the desks at my company with amazing employees. I can hold people to all the right standards. But ultimately I can't control what they do. This is why hiring for culture works. What Zappos does is radical because it trusts. It says "Go do the best job you can do for the customer, without policy". And leaves employees to come up with human solutions. Something it turns out they're quite good at, if given the chance.

Now let's take another domain, one I'm very familiar with - fault tolerant process control systems. Software and support hardware applied to the emergency shutdown of exothermic chemical reactors - those that make the unleaded gasoline for our cars - nuclear reactors and conventionally fired power generation systems, gas turbine controls, and other must-work-properly machines. And a similar domain, DO-178 flight control systems, which must equally work without fail.

At Zappos the HR Director describes a work environment where employees are free to do the best job they can for the customer. In the domains above, employees also work to do the best job for the customer they can, but flight safety, life safety, and equipment safety are part of that best job. In other domains we work in, doing the best job for the customer means processing transactions worth hundreds of millions of dollars, with extremely low error rates, in the enterprise IT paradigm.

Zappos can recover from an error; the other domains can't. In those domains, nonrecoverable errors mean serious loss of revenue, or even loss of life. I come from those domains, and they inform my view of the software development world - where software fail safe and fault tolerance are the basis of business success.

So when we hear about the freedom to fail early and fail often in the absence of a domain or context, care is needed. Without a domain and context, it is difficult to assess the credibility of any concept. It comes down to Trust alone or Trust But Verify. I can also guarantee that Zappos has some of the verify process. It is doubtful employees are left to do anything they wish, for the simple reason that there is a business governance process at any firm, no matter the size. Behavior, even full trust behavior, fits inside that governance process.
Categories: Project Management

Splitting User Stories: Alternate Patterns

Too many things going on will lead to less attention to any one subject.


Splitting user stories is an important tool that helps teams in a number of ways, ranging from improving the flow of stories through the development process to improving the team's understanding of what is required to deliver the story. In almost every case, smaller is better. We have identified a number of techniques for splitting user stories and a framework for evaluating those splits. Additional splitting techniques include:

  1. And/Or Removal: User stories that include “and” or “or” typically reflect compound thoughts. This is an indication that the story is an epic, which will be too large to complete in a single sprint. Split the stories to eliminate instances of “and” and “or”. An example of a story with an “and/or” problem is: As a project manager I want to be able to review and approve time and expenses logged to my projects to ensure accurate reporting and billing. Separate stories could be constructed for reviewing time accounting, approving time accounting, reviewing expenses and approving expenses. Simplicity reduces the potential for confusion.
  2. Simple/Complex: Complexity makes a story harder to complete, and therefore the story will take longer to deliver than a similarly sized, simple story. Splitting can be used to isolate functionality that is more or less complex. Splitting based on complexity gives product owners the option of doing the simple stories first, an approach that can provide teams with insights that reduce the complexity of later stories.
  3. Splitting Non-functional Requirements: Many user stories combine functional and non-functional components. For example, the story “As a home brewer, I want a conversion calculator that returns results in a 40 point type display so that I can determine the alcohol level in the beer.” The story could be split to separate the functional side of the story (conversion results) from the non-functional component (size of display). Splitting the story lets the team deliver the calculation before having to address how it is displayed.

These three patterns for splitting user stories (in addition to those noted in previous articles, including workflow, business rules, data variations, elementary processes or syntheses of patterns) are just tools for teams. Teams split stories to help them understand what they are committing to deliver, to reduce the complexity of large stories (or at the very least to isolate the hard parts), and to enhance their ability to consistently deliver value. Splitting stories increases productivity and quality, and reduces the amount of time the team spends scratching their collective heads trying to figure out what they will deliver and how they will deliver it.


Categories: Process Management

The fastest route between voice search and your app

Android Developers Blog - 10 hours 37 min ago
By Jarek Wilkiewicz, Developer Advocate, Google Search

How many lines of code will it take to let your users say Ok Google, and search for something in your app? Hardly any. Starting today, all you need is a small addition to your AndroidManifest.xml in order to connect the Google Now SEARCH_ACTION with your searchable activity:

<activity android:name=".SearchableActivity">
    <intent-filter>
        <action android:name="com.google.android.gms.actions.SEARCH_ACTION"/>
        <category android:name="android.intent.category.DEFAULT"/>
    </intent-filter>
</activity>

Once you make these changes, your app can receive the SEARCH_ACTION intent containing the SearchManager.QUERY extra with the search expression.
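
To illustrate, here is a minimal sketch of how the SearchableActivity declared above might read the spoken query. This is not from the original announcement; the doSearch() helper is a placeholder for your app's own search logic:

import android.app.Activity;
import android.app.SearchManager;
import android.content.Intent;
import android.os.Bundle;

public class SearchableActivity extends Activity {

    private static final String SEARCH_ACTION = "com.google.android.gms.actions.SEARCH_ACTION";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        handleIntent(getIntent());
    }

    @Override
    protected void onNewIntent(Intent intent) {
        super.onNewIntent(intent);
        handleIntent(intent);
    }

    private void handleIntent(Intent intent) {
        // The Google app delivers the voice query in the SearchManager.QUERY extra.
        if (SEARCH_ACTION.equals(intent.getAction())) {
            String query = intent.getStringExtra(SearchManager.QUERY);
            doSearch(query);
        }
    }

    private void doSearch(String query) {
        // App-specific search logic goes here.
    }
}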

At Google, we always look for innovative ways to help you improve mobile search and drive user engagement back to your app. For example, users can now say to the Google app: “Ok Google, search pizza on Eat24” or “Ok Google, search for hotels in Maui on TripAdvisor.”

This feature is available on English locale Android devices running Jelly Bean and above with the Google app v3.5 or greater. Last but not least, users can enable the Ok Google hot-word detection from any screen, which offers them the fastest route between their search command and your app!




Categories: Programming

No map is an island: Introducing a connected JavaScript Maps API experience

Google Code Blog - 13 hours 34 min ago
Cross-posted from the Google Geo Developers blog

Our digital lives are increasingly connected. We research on our laptops, look up directions on our phones and even navigate with our watches. And by creating maps unique to each user and offering features such as saved places, Google Maps has been making it easier to continue these tasks as we move from device to device.

However, although maps embedded from Google Maps are now built uniquely for every Google user, most of the now two million active sites and apps using the Maps APIs are still islands. When I look for a place to eat on Zagat, I can’t see how far away it is from work. When I look at a travel map in the New York Times, I can’t save those places in order to navigate to them later.

Today we’re taking a step towards connecting these two million sites and apps by introducing a signed-in JavaScript Maps API experience and a feature called attributed save. To help illustrate, we’ve partnered with the New York Times to bring this experience to their 36 hours travel column.

A connected JavaScript Maps API

When you add &signed_in=true to the Google Maps JavaScript API source URL, your end users will have the option to sign into the map with their Google account. When they do so, your users will receive a map built for them, in the context of your app. Their saved places — including home and work addresses (if set by the end user) as well as other relevant places — will appear automatically on their map, providing a layer of context that anchors your content and makes it stand out even more.

Attributed save

Once users are signed into Google Maps in your app, we can together create an integrated experience between your map content and Google Maps. With attributed save, signed-in users can save places from your app to be accessed later, with attribution and linkbacks, on Google Maps for the web, Android and iOS.

What’s more, you can also enable deep links into your mobile applications. For instance, users can save a place from your desktop app (such as Zagat.com), open up the place on Google Maps on their Android device, and deep link directly into your Android app.

Enabling attributed save is easy — just specify your app name, a link and a place search string or place ID when creating a marker and info window. Or use our SaveWidget to enable attributed save in your own custom info window.

In addition, we’re also launching attributed save across all embedded maps today. Attribution and linkback parameters will be inferred automatically from the domain and referrer of the host site, so if you’re using our embedded maps, you don’t need to do anything! If you’re using the Google Maps Embed API, you may customize the source and linkback parameters yourself.

One final point: we’ve stated in the past that the JavaScript Maps API is cookieless if loaded from maps.googleapis.com. As of today, to enable the signed in maps experience on sites across the web, the signed-in version of the JavaScript Maps API now does rely on cookies to detect the end user’s signed-in state. Please review our documentation for further details.

That’s all for now. Go try it out. And remember, no map is an island, entire of itself...

Posted by Ken Hoetmer, Product Manager, Google Maps APIs
Categories: Programming

Quote of the Day

Herding Cats - Glen Alleman - 14 hours 30 min ago

Vision without Execution is Hallucination - Jeffrey E. Garten, The Mind of the CEO

All the rhetoric around any idea needs actionable outcomes that can be tested in the marketplace, beyond the personal anecdotes of self-selected conversations.

 

Categories: Project Management

Quote of the Day

Herding Cats - Glen Alleman - 15 hours 30 min ago

The Sky's the limit when you don't know what you don't know.

Categories: Project Management

What’s Your Management 3.0 Story?

NOOP.NL - Jurgen Appelo - 17 hours 36 min ago

Some readers told me they have used Moving Motivators during job interviews.

Some readers told me they used Delegation Boards on management teams.

Some readers told me they adopted Merit Money to get rid of bonuses.

I just visited REA Group in Melbourne, Australia, where they use lots of Management 3.0 practices, and have great experiences to share.

The post What’s Your Management 3.0 Story? appeared first on NOOP.NL.

Categories: Project Management

How to Be a 10x Better Speaker with 20 Benefits

NOOP.NL - Jurgen Appelo - 17 hours 50 min ago

Last week, one conference attendee told me my presentation at Agile Tour Toulouse was perfect.

It was very kind, but I didn’t believe her.

In five years, I have spoken at almost 100 conferences and joined a similar number of community and company events. Thanks to the many discussions I had with event organizers, I think I now understand how to be a more valuable speaker.

The post How to Be a 10x Better Speaker with 20 Benefits appeared first on NOOP.NL.

Categories: Project Management

How to Dockerize your Dropwizard Application

Xebia Blog - 20 hours 39 min ago

If you want to deploy your Dropwizard Application on a Docker server, you can Dockerize your Dropwizard Application. Since a Dropwizard Application is already packaged as an executable Java ARchive file, creating a Docker image for such an application should be easy.

 

In this blog, you will learn how to Dockerize a Dropwizard Application using 4 easy steps.

Before you start

  • You are going to use the Dropwizard-example application, which can be found at the Dropwizard GitHub repository.
  • Additionally, you need Docker. I used Boot2Docker to run the Dockerized Dropwizard Application on my laptop. If you use Boot2Docker, you may need this Boot2Docker workaround to access your Dockerized Dropwizard application.
  • This blog does not describe how to create Dropwizard applications. The Dropwizard getting started guide provides an excellent starting point if you would like to know more about building your own Dropwizard applications.

 

Step 1: create a Dockerfile

You can start with creating a Dockerfile. Docker can automatically build images by reading the instructions described in this file. Your Dockerfile could look like this:

FROM dockerfile/java:openjdk-7-jdk

ADD dropwizard-example-1.0.0.jar /data/dropwizard-example-1.0.0.jar

ADD example.keystore /data/example.keystore

ADD example.yml /data/example.yml

RUN java -jar dropwizard-example-1.0.0.jar db migrate /data/example.yml

CMD java -jar dropwizard-example-1.0.0.jar server /data/example.yml

EXPOSE 8080

 

The Dropwizard Application needs a Java Runtime, so you can start from a base image already available at Docker Hub, for example: dockerfile/java:openjdk-7-jdk.

You must add the Dropwizard Application files to the image, using the ADD instruction in your Dockerfile.

Next, simply specify the commands of your Dropwizard Application which you want to execute during image build and container runtime. In the example above, the db migrate command is executed when the Docker image is built, and the server command is executed when you issue a docker run command to create a running container.

Finally, the EXPOSE instruction tells Docker that your container will listen on the specified port(s) at runtime.

 

Step 2: build the Docker image

Place the Dockerfile and your application files in a directory and execute the docker build command to build a Docker image.

docker@boot2docker:~$ docker build -t dropwizard/dropwizard-example ~/dropwizard/

 

In the console output you should be able to see that the Dropwizard Application db migrate command is executed. If everything is ok, the last line reported informs you that the image was built successfully.

Successfully built dd547483b57b

 

Step 3: run the Docker image

Use the docker run command to create a container based on the image you have created. If you need to find your image ID, use the docker images command to list your images. It should take around 3 seconds to start the Dockerized Dropwizard example application.

docker run -p 8080:8080 dd547483b57b

Notice that I included the -p option to add a network port binding, which maps port 8080 inside the container to port 8080 on the Docker host. You can verify whether your container is running using the docker ps command.

docker@boot2docker:~$ docker ps

CONTAINER ID        IMAGE                                  COMMAND                CREATED             STATUS              PORTS                    NAMES

3b6fb75adad6        dropwizard/dropwizard-example:latest   "/bin/sh -c 'java -j   3 minutes ago       Up 3 minutes        0.0.0.0:8080->8080/tcp   high_turing

 

Step 4: test the application

Now the application is ready for use. You can access the application using your Docker host IP address and the forwarded port 8080. For example, use the Google Advanced Rest Client App to register “John Doe”.
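
If you prefer a scripted check over a REST client, here is a hedged sketch in Java. The /people resource and the fullName/jobTitle fields are assumptions based on the dropwizard-example application (adjust them if your version differs), and 192.168.59.103 is just the usual Boot2Docker default, so substitute your own Docker host IP:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class RegisterPerson {
    public static void main(String[] args) throws Exception {
        // POST a new person to the Dockerized Dropwizard example application.
        URL url = new URL("http://192.168.59.103:8080/people");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);

        String json = "{\"fullName\":\"John Doe\",\"jobTitle\":\"Chief Wizard\"}";
        try (OutputStream out = conn.getOutputStream()) {
            out.write(json.getBytes(StandardCharsets.UTF_8));
        }

        // A 2xx response code means the person was stored successfully.
        System.out.println("Response code: " + conn.getResponseCode());
    }
}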


User Stories: Splitting User Stories and Adding Detail

 

At some point you need to dive into the detail.


A user story is a simple statement of need. A common format for a user story is “<persona> <goal> <benefit>”. Typically, when a user story is initially formed it is not ready to be developed. Stories can lack acceptance criteria, might need additional detail, or might need to be broken down.

Acceptance criteria provide confirmation that the story does what was intended and can be used to create an acceptance test. They provide additional detail that helps the team develop an understanding of the story. I have found that acceptance criteria also provide an excellent platform for generating the conversations the user story process expects. In a perfect world acceptance criteria would be written when the story is originally developed or during backlog grooming at the latest.

As team members and stakeholders talk about user stories, knowledge is generated. That knowledge can be housed in pictures, notes, wireframes, or paper and functional prototypes, to name a few of the tools in the team’s arsenal for generating a conversation and capturing it. These “documents” need to be captured and linked to the story. The one attachment mechanism you do not want to rely on in the long term is your memory.

Large user stories, almost by definition, lack detail. Epics (large user stories) need to be broken down so the team can gain a better understanding of the story and complete it during the sprint. Splitting stories is a mechanism to expose functional detail. In Splitting User Stories Based on Elementary Processes, we used an example of a large time accounting data entry story. The story was:

  • As a time accounting user, I want to maintain my time so that I can account for the work I do.

The story is well formed (it fits the format we are using), but it is too large and obscures a lot of important detail. The story could easily be broken down into smaller stories. For example, add time, change time, display time and delete time would be a quick functional split. Once the story is broken down and the new functionality is exposed, acceptance criteria can be generated, providing more detail and generating even more knowledge (a virtuous cycle).

User stories evolve. In almost all scenarios I have witnessed, additional information and knowledge are generated by the team as they split stories, digest acceptance criteria, have conversations, and build models, prototypes and designs.


Categories: Process Management

Tips for integrating with Google Accounts on Android

Android Developers Blog - Tue, 10/28/2014 - 18:24
By Laurence Moroney, Developer Advocate

Happy Tuesday! We've had a few questions come in recently regarding Google Accounts on Android, so we've put this post together to show you some of our best practices. The tips today will focus on Android-based authentication, which is easily achieved through the integration of Google Play services. Let's get started.

Unique Identifiers

A common confusion happens when developers use the account name (a.k.a. email address) as the primary key to a Google Account. For instance, when using GoogleApiClient to sign in a user, a developer might use the following code inside of the onConnected callback for a registered GoogleApiClient.ConnectedCallbacks listener:

[Error prone pseudocode]
String accountName = Plus.AccountApi.getAccountName(mGoogleApiClient);
// createLocalAccount() is specific to the app's local storage strategy.
createLocalAccount(accountName);

While it is OK to store the email address for display or caching purposes, it is possible for users to change the primary email address on a Google Account. This can happen with various types of accounts, but these changes happen most often with Google Apps For Work accounts.

So what's a developer to do? Use the Google Account ID (as opposed to the Account name) to key any data for your app that is associated to a Google Account. For most apps, this simply means storing the Account ID and comparing the value each time the onConnected callback is invoked to ensure the data locally matches the currently logged in user. The API provides methods that allow you to get the Account ID from the Account Name. Here is an example snippet you might use:

[Google Play Services 6.1+]
String accountName = Plus.AccountApi.getAccountName(mGoogleApiClient);
String accountID = GoogleAuthUtil.getAccountId(accountName);
createLocalAccount(accountID);
[Earlier Versions of Google Play Services (please upgrade your client)]
Person currentUser = Plus.PeopleApi.getCurrentPerson(mGoogleApiClient);
String accountID = currentUser.getId();
createLocalAccount(accountID);

This will key the local data against a Google Account ID, which is unique and stable for the user even after changing an email address.

So, in the above scenario, if your data was keyed on an ID, you wouldn’t have to worry if your users change their email address. When they sign back in, they’ll still get the same ID, and you won’t need to do anything with your data.

Multiple Accounts

If your app supports multiple account connections simultaneously (like the Gmail user interface shown below), you are calling setAccountName on the GoogleApiClient.Builder when constructing GoogleApiClients. This requires you to store the account name as well as the Google Account ID within your app. However, the account name you’ve stored will be different if the user changes their primary email address. The easiest way to deal with this is to prompt the user to re-login. Then, update the account name when onConnected is called after login. Any time a login occurs, you can use code such as this to compare Account IDs and update the email address stored locally for the Account ID.

[Google Play Services 6.1+]
String accountName = Plus.AccountApi.getAccountName(mGoogleApiClient);
String accountID = GoogleAuthUtil.getAccountId(accountName);
// isExistingLocalAccount(), createLocalAccount(), 
// getLocalDataAccountName(), and updateLocalAccountName() 
// are all specific to the app's local storage strategy.
boolean existingLocalAccountData = isExistingLocalAccount(accountID);
if (!existingLocalAccountData) {
    // New Login.
    createLocalAccount(accountID, accountName);
} else {
    // Existing local data for this Google Account.
    String cachedAccountName = getLocalDataAccountName(accountID);    
    if (!cachedAccountName.equals(accountName)) {
        updateLocalAccountName(accountID, accountName);
    }
}

This scenario reinforces the importance of using the Account ID to key all data stored in your app.

Online data

The same best practices above apply to storing data for Google Accounts in web servers for your app. If you are storing data on your servers in this manner and treating the email address as the primary key:

ID [Primary Key]    Field 1    Field 2    Field 3
user1@gmail.com     Value 1    Value 2    Value 3

You need to migrate to this model, where the primary key is the Google Account ID:

ID [Primary Key]         Email              Field 1    Field 2    Field 3
108759069548186989918    user1@gmail.com    Value 1    Value 2    Value 3

If you don't make Google API calls from your web server, you might be able to depend on the Android application to notify your web server of changes to the primary email address when implementing the updateLocalAccountName method referenced in the multiple accounts sample code above. If you make Google API calls from your web server, you likely implemented them using cross-client authentication, and you can detect changes via the OAuth2 client libraries or REST endpoints on your server as well.
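
For example, if your Android app sends a Google ID token to your server, a hedged sketch of the server-side handling with the google-api-client Java library might look like the following; CLIENT_ID and updateOrCreateUser() are placeholders, not part of the original post:

import com.google.api.client.googleapis.auth.oauth2.GoogleIdToken;
import com.google.api.client.googleapis.auth.oauth2.GoogleIdTokenVerifier;
import com.google.api.client.http.javanet.NetHttpTransport;
import com.google.api.client.json.jackson2.JacksonFactory;
import java.util.Collections;

public class AccountKeyExample {

    // Placeholder: your own OAuth2 web client ID.
    private static final String CLIENT_ID = "your-client-id.apps.googleusercontent.com";

    public static void handleSignIn(String idTokenString) throws Exception {
        GoogleIdTokenVerifier verifier = new GoogleIdTokenVerifier.Builder(
                new NetHttpTransport(), JacksonFactory.getDefaultInstance())
                .setAudience(Collections.singletonList(CLIENT_ID))
                .build();

        GoogleIdToken idToken = verifier.verify(idTokenString);
        if (idToken == null) {
            throw new IllegalArgumentException("Invalid ID token");
        }

        GoogleIdToken.Payload payload = idToken.getPayload();
        String accountId = payload.getSubject(); // stable Google Account ID - use as the primary key
        String email = payload.getEmail();       // may change over time - store for display only

        // Placeholder for your persistence layer: key the row on accountId
        // and refresh the cached email if it has changed.
        updateOrCreateUser(accountId, email);
    }

    private static void updateOrCreateUser(String accountId, String email) {
        // App-specific storage logic goes here.
    }
}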

Conclusion

When using Google Account authentication for your app, it’s definitely a best practice to use the account ID, as opposed to the account name, to distinguish data for the user. In this post, we saw three scenarios where you may need to make changes to make your apps more robust. With the growing adoption of Google for Work, email address changes (with the account ID staying the same) may occur more frequently, so we encourage all developers to make plans to update their code as soon as possible.




Categories: Programming

Five Tips for Tactical Management

Sometimes, you just need to get on with the work. You need to give yourself some breathing room so you can think for a while. Here are some tips that will help you tackle the day-to-day management work:

  1. Schedule and conduct your one-on-ones. Being a manager means you make room for the people stuff: the one-on-ones, the coaching and feedback, or the meta-coaching and meta-feedback that you offer in the one-on-ones. Those actions are tactical, and if you don’t do them, they become strategic.
  2. As a manager, make sure you have team meetings. No, not serial status meetings. Never those. Problem solving meetings, please. The more managers you manage, the more critical this step is. If you miss these meetings, people notice. They wonder what’s wrong with you and they make up stories. While the stories might be interesting, you do not want people making stories up about what is wrong with you or your management, do you?
  3. Stop multitasking and delegate. Your people are way more capable than you think they are. Stop trying to do it all. Stop trying to do technical work if you are a manager. Take pride in your management work and do the management work.
  4. Stop estimating on behalf of your people. This is especially true for agile teams. If you don’t like the estimate, ask them why they think it will take that long, and then work with them on removing obstacles.
  5. If you have leftover time, it’s time to work on the strategic work. What is the most important work you and your team can do? What is your number one project? What work should you not be doing?  This is project portfolio management. You might find it difficult to make these decisions. But the more you make these decisions, the better it is for you and your group.

Okay, there are your five tips. Happy management.

Categories: Project Management

ScanAgile 2015 submissions are open!

Software Development Today - Vasco Duarte - Tue, 10/28/2014 - 18:15


Just a quick note today to let you know that the Call for Sessions for ScanAgile, the Agile Finland annual conference, is open for submissions.
You can read the whole call for sessions here. You will find the submission form on that page as well.

For me the most interesting tracks are:

  • Off-Piste: interesting lessons learned about being agile and agile related topics, from other industries 
  • Black Piste: Topics for experienced agile practitioners
These are just some of the tracks. In Scan Agile there will also be tracks for those starting up or that have already started but are in the early phases of their Agile transformation journey. 


The Agile Finland Community is very active and has a long history of agile adoption and promotion. They have some of the most advanced practitioners in the world, so I am really looking forward to seeing who the Scan Agile team chooses for the 2015 lineup of the conference!


Hope to see many of you there! 

Google Fit SDK available now

Google Code Blog - Tue, 10/28/2014 - 18:13

After previewing it earlier this summer, today the Google Fit APIs are fully available on Android, Android Wear and the web so that you can build and publish apps for users on Google Play. Head to developers.google.com/fit to learn more.

The Google Fit platform gives the user one place to keep all their fitness activities. With the user’s permission, any developer can store or read the user’s data from Google Fit and use it to build powerful and useful fitness experiences for their users.
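
As a rough illustration (not from the announcement), reading a day of step data with the History API on Android might look like the sketch below; the GoogleApiClient setup, fitness scopes, connection callbacks and error handling are omitted:

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.fitness.Fitness;
import com.google.android.gms.fitness.data.DataType;
import com.google.android.gms.fitness.request.DataReadRequest;
import com.google.android.gms.fitness.result.DataReadResult;
import java.util.concurrent.TimeUnit;

public class FitHistoryExample {

    // client must already be connected and built with Fitness.HISTORY_API
    // plus the appropriate fitness scopes (setup omitted here).
    public static DataReadResult readLastDayOfSteps(GoogleApiClient client) {
        long endTime = System.currentTimeMillis();
        long startTime = endTime - TimeUnit.DAYS.toMillis(1);

        DataReadRequest readRequest = new DataReadRequest.Builder()
                .read(DataType.TYPE_STEP_COUNT_DELTA) // raw step-count deltas
                .setTimeRange(startTime, endTime, TimeUnit.MILLISECONDS)
                .build();

        // Blocking call - invoke off the main thread.
        return Fitness.HistoryApi.readData(client, readRequest).await(1, TimeUnit.MINUTES);
    }
}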

For users, we’re also launching the Google Fit app on Google Play for smartphones, tablets, Wear, and on the web at google.com/fit. The Google Fit app provides users with effortless, all-day activity tracking, as well as displaying key fitness data that our partners have stored in the platform. This app will also provide an opportunity for users to discover apps that help them track their fitness goals using Google Fit.

To get a quick introduction to the Fit APIs, check out the Dev Byte videos below.


A number of partners from around the fitness industry have been hard at work preparing their apps for Google Fit. In the coming weeks, our previously-announced launch partners, Nike+ Running, Withings HealthMate, Runkeeper, Runtastic, and Noom Coach, will launch their Google Fit integrations. We’re also happy to announce 6 new Google Fit partners: Strava, MapMyRun, LynxFit, LifeSum, FatSecret, and Azumio. These new partners are also preparing great experiences that will launch soon.

Please join the Google Fit Developer Community to share ideas and get inspired. We can’t wait to see what you come up with!

Posted by Angana Ghosh, Product Manager, Google Fit

Categories: Programming

Focus on Outcomes…not Solutions

Software Requirements Blog - Seilevel.com - Tue, 10/28/2014 - 17:00
I had a conversation a few weeks ago with an executive at a large organization, and he mentioned that he had read an interesting article a few weeks back on how Business Analysts should be focusing on the outcomes, and not on solutions.  He was surprised at the suggestion of the article, and wanted to […]
Categories: Requirements

Sponsored Post: Apple, TokuMX, Hypertable, VSCO, Gannett, Sprout Social, Scalyr, FoundationDB, AiScaler, Aerospike, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • Apple has multiple openings. Changing the world is all in a day's work at Apple. Imagine what you could do here. 
    • Site Reliability Engineer. As a member of the Apple Pay SRE team, you’re expected to not just find the issues, but to write code and fix them. You’ll be involved in all phases and layers of the application, and you’ll have a direct impact on the experience of millions of customers. Please apply here.
    • Software Engineering Manager. In this role, you will be communicating extensively with business teams across different organizations, development teams, support teams, infrastructure teams and management. You will also be responsible for working with cross-functional teams to deliver large initiatives. Please apply here
    • Sr. Software Developer. We are looking for a solid senior-level Java/C programmer who will be working on security software development. This software will provide the data protection, integrity, and service authentication services for iOS devices. Please apply here.
    • DevOps Software Engineer - Apple Pay, iOS Systems.  The iOS Systems team is looking for an outstanding DevOps software engineer to help make our rapidly growing platform manageable, scalable, and reliable using state of the art technologies and cutting edge system automation. Come join the team to strategize, architect, and build infrastructure to help our systems perform and scale. Please apply here

  • VSCO. Do you want to: ship the best digital tools and services for modern creatives at VSCO? Build next-generation operations with Ansible, Consul, Docker, and Vagrant? Autoscale AWS infrastructure to multiple Regions? Unify metrics, monitoring, and scaling? Build self-service tools for engineering teams? Contact me (Zo, zo@vs.co) and let’s talk about working together. vs.co/careers.

  • Gannett Digital is looking for talented Front-end developers with strong Python/Django experience to join their Development & Integrations team. The team focuses on video, user generated content, API integrations and cross-site features for Gannett Digital’s platform that powers sites such as http://www.usatoday.com, http://www.wbir.com or http://www.democratandchronicle.com. Please apply here.

  • Platform Software Engineer, Sprout Social, builds world-class social media management software designed and built for performance, scale, reliability and product agility. We pick the right tool for the job while being pragmatic and scrappy. Services are built in Python and Java using technologies like Cassandra and Hadoop, HBase and Redis, Storm and Finagle. At the moment we’re staring down a rapidly growing 20TB Hadoop cluster and about the same amount stored in MySQL and Cassandra. We have a lot of data and we want people hungry to work at scale. Apply here.

  • UI Engineer. AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data. AppDynamics, leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.
Fun and Informative Events
  • Sign Up for New Aerospike Training Courses.  Aerospike now offers two certified training courses; Aerospike for Developers and Aerospike for Administrators & Operators, to help you get the most out of your deployment.  Find a training course near you. http://www.aerospike.com/aerospike-training/

  • November TokuMX Meetups for Those Interested in MongoDB. Join us in one of the following cities in November to learn more about TokuMX and hear TokuMX use cases. 11/5 - London; 11/11 - San Jose; 11/12 - San Francisco. Not able to get to these cities? Check out our website for other upcoming Tokutek events in your area - www.tokutek.com/events.
Cool Products and Services
  • Hypertable Inc. Announces New UpTime Support Subscription Packages. The developer of Hypertable, an open-source, high-performance, massively scalable database, announces three new UpTime support subscription packages – Premium 24/7, Enterprise 24/7 and Basic. 24/7/365 support packages start at just $1995 per month for a ten node cluster -- $49.95 per machine, per month thereafter. For more information visit us on the Web at http://www.hypertable.com/. Connect with Hypertable: @hypertable--Blog.

  • FoundationDB launches SQL Layer. SQL Layer is an ANSI SQL engine that stores its data in the FoundationDB Key-Value Store, inheriting its exceptional properties like automatic fault tolerance and scalability. It is best suited for operational (OLTP) applications with high concurrency. Users of the Key Value store will have free access to SQL Layer. SQL Layer is also open source, you can get started with it on GitHub as well.

  • Diagnose server issues from a single tab. Scalyr replaces all your monitoring and log management services with one, so you can pinpoint and resolve issues without juggling multiple tools and tabs. Engineers say it's powerful and easy to use. Customer support teams use it to troubleshoot user issues. CTO's consider it a smart alternative to Splunk, with enterprise-grade functionality, sane pricing, and human support. Trusted by in-the-know companies like Codecademy – learn more!

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Cloud deployable. Free instant trial, no sign-up required.  http://aiscaler.com/

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Categories: Architecture

Azure: New Marketplace, Network Improvements, New Batch Service, Automation Service, more

ScottGu's Blog - Scott Guthrie - Tue, 10/28/2014 - 15:35

Today we released a major set of updates to Microsoft Azure. Today’s updates include:

  • Marketplace: Announcing Azure Marketplace and partnerships with key technology partners
  • Networking: Network Security Groups, Multi-NIC, Forced Tunneling, Source IP Affinity, and much more
  • Batch Computing: Public Preview of the new Azure Batch Computing Service
  • Automation: General Availability of the Azure Automation Service
  • Anti-malware: General Availability of Microsoft Anti-malware for Virtual Machines and Cloud Services
  • Virtual Machines: General Availability of many more VM extensions – PowerShell DSC, Octopus, VS Release Management

All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them:

Marketplace: Announcing Azure Marketplace and partnerships with key technology partners

Last week, at our Cloud Day event in San Francisco, I announced a new Azure Marketplace that helps to better connect Azure customers with partners, ISVs and startups.  With just a couple of clicks, you can now quickly discover, purchase, and deploy any number of solutions directly into Azure.

Exploring the Marketplace

You can explore the Azure Marketplace by clicking the Marketplace title that is pinned by default to the home-screen of the Azure Preview Portal:


Clicking the Marketplace tile will enable you to explore a large selection of applications, VM images, and services that you can provision into your Azure subscription:


Using the marketplace provides a super easy way to take advantage of a rich ecosystem of applications and services integrated to run great with Azure.  Today’s marketplace release includes multi-VM templates to run Hadoop clusters powered by Cloudera or Hortonworks, Linux VMs powered by Ubuntu, CoreOS, SUSE and CentOS, Microsoft SharePoint Server Farms, Cassandra Clusters powered by DataStax, and a wide range of security virtual appliances.

You can click any of the items in the gallery to learn more about them and optionally deploy them.  Doing so will walk you through a simple-to-follow creation wizard that enables you to optionally configure how/where they will run, as well as display any additional pricing required for the apps/services/VM images that you select.

For example, below is all it takes to stand-up an 8-node DataStax Enterprise cluster:


Solutions you purchase through the Marketplace will be automatically billed to your Azure subscription (avoiding the need for you to set up a separate payment method).  Virtual Machine images will support the ability to bring your own license or rent the image license by the hour (which is ideal for proof of concept solutions or cases where you need the solution for only a short period of time).  Both Azure Direct customers as well as customers who pay using an Enterprise Agreement can take advantage of the Azure Marketplace starting today.

You can learn more about the Azure Marketplace as well as browse the items within it here.

Networking: Lots and lots of New Features and Improvements

This week’s Azure update includes a ton of new capabilities to the Azure networking stack.  You can use these new networking capabilities immediately in the North Europe region, and they will be supported worldwide in all regions in November 2014.  The new network capabilities include:

Network Security Groups

You can now create Network Security groups to define access control rules for inbound and outbound traffic to a Virtual machine or a group of virtual machines in a subnet. The security groups and the rules can be managed and updated independent of the life cycle of the VM.

Multi-NIC Support

You can now create and manage multiple virtual network interfaces (NICs) on a VM.  Multi-NIC support is a fundamental requirement for a majority of network virtual appliances that can be deployed in Azure. Having this support now enabled within Azure will enable even richer network virtual appliances to be used.

Forced Tunneling

You can now redirect or “force” all Internet-bound traffic that originates in a cloud application back through an on-premises network via a Site-to-Site VPN tunnel for inspection and auditing. This is a critical security capability for enterprise grade applications.

ExpressRoute Enhancements

You can now share a single ExpressRoute connection across multiple Azure subscriptions. Additionally, a single Virtual Network in Azure can now be linked to more than one ExpressRoute circuit, thereby enabling much richer backup and disaster recovery scenarios.


New VPN Gateway Sizes

To cater to the growing hybrid connectivity throughput needs and the number of cross premise sites, we are announcing the availability of a higher performance Azure VPN gateway. This will enable faster ExpressRoute and Site-to-Site VPN gateways with more tunnels.

Operations and audit logs for VNet Gateways and ExpressRoute

You can now view operations logs for Virtual Network Gateways and ExpressRoute circuits. The Azure portal will now show operations logs and information on all API calls you make, as well as important infrastructure changes such as scheduled updates to gateways.

Advanced Virtual Network Gateway policies

We now enable the ability for you to control encryption for the tunnel between Virtual Networks. You now have a choice between 3DES, AES128, AES256 and Null encryption, and you can also enable Perfect Forward Secrecy (PFS) for IPsec/IKE gateways.

Source IP Affinity

The Azure Load Balancer now supports a new distribution mode called Source IP Affinity (also known as session affinity or client IP affinity). You can now load balance traffic based on a 2-tuple (Source-IP, Destination-IP) or 3-tuple (Source-IP, Destination-IP and Protocol) distribution modes.

Nested policies for Traffic Manager

You can now create nested policies for traffic management. This allows tremendous flexibility in creating powerful load-balancing and failover schemes to support the needs of larger, more complex deployments.

Portal Support for Managing Internal Load Balancer, Reserved and Instance IP addresses for Virtual Machines

It is now possible to use the Azure Preview Portal to manage creating and setting up internal load balancers, as well as reserved and instance IP addresses for virtual machines.

Automation: General Availability of Azure Automation Service

I am excited to announce the General Availability of the Azure Automation service. Azure Automation enables the creation, deployment, monitoring, and maintenance of resources in an Azure environment using a highly scalable and reliable workflow engine. The service can be used to orchestrate time-consuming and frequently repeated operational tasks across Azure and third-party systems while decreasing operating expenses.

Azure Automation allows you to build runbooks (PowerShell Workflows) to describe your administration processes, provides a secure global assets store so you don’t need to hardcode sensitive information within your runbooks, and offers scheduling so that runbooks can be triggered automatically.

Runbooks can automate a wide range of scenarios – from simple day to day manual tasks to complex processes that span multiple Azure services and 3rd party systems. Because Automation is built on PowerShell, you can take advantage of the many existing PowerShell modules, or author your own to integrate with third party systems.

Creating and Editing Runbooks

You can create a runbook from scratch, or start by importing an existing template in the runbook gallery:


Runbooks can also be edited directly in the administration portal:


Pricing

Available as a pay-as-you-go service, Automation is billed based on the number of job run time minutes used in a given Azure subscription.  500 minutes of free job runtime credits are also included each month for Azure customers to use at no charge.

Learn More

To learn more about Azure Automation, check out the following resources:

Batch Service: Preview of Azure Batch - new job scheduling service for parallel and HPC apps

I’m excited to announce the public preview of our new Azure Batch Service. This new platform service provides “job scheduling as a service” with auto-scaling of compute resources, making it easy to run large-scale parallel and high performance computing (HPC) work in Azure. You submit jobs, we start the VMs, run your tasks, handle any failures, and then shut things down as work completes.

Azure Batch is the job scheduling engine that we use internally to manage encoding for Azure Media Services, and for testing Azure itself. With this preview, we are excited to expand our SDK with a new application framework from GreenButton, a company Microsoft acquired earlier in the year. The Azure Batch SDK makes it easy to cloud-enable parallel, cluster, and HPC applications by describing jobs with the required resources, data, and one or more compute tasks.

Azure Batch can be used to run large volumes of similar tasks or applications in parallel, programmatically. A command line program or script takes a set of data files as input, processes the data in a series of tasks, and produces a set of output files. Examples of batch workloads that customers are running today in Azure include calculating risk for banks and insurance companies, designing new consumer and industrial products, sequencing genes and developing new drugs, searching for new energy sources, rendering 3D animations, and transcoding video.

Azure Batch makes it easy for these customers to use hundreds, thousands, tens of thousands of cores, or more on demand. With job scheduling as a service, Azure developers can focus on using batch computing in their applications and delivering services without needing to build and manage a work queue, scale resources up and down efficiently, dispatch tasks, and handle failures.


The scale of Azure helps batch computing customers get their work done faster, experiment with different designs, run larger and more precise models, and test a large number of different scenarios without having to invest in and maintain large clusters.

Learn more about Azure Batch and start using it for your applications today.

Virtual Machines: General Availability of Microsoft Anti-Malware for VMs and Cloud Services

I’m excited to announce that the Microsoft Anti-malware security extension for Virtual Machines and Cloud Services is now generally available.  We are releasing it as a free capability that you can use at no additional charge.

The Microsoft Anti-malware security extension can be used to help identify and remove viruses, spyware or other malicious software.  It provides real-time protection from the latest threats and also supports on-demand scheduled scanning.  Enabling it is a good security best practice for applications hosted either on-premises or in the cloud.

Enabling the Anti-Malware Extension

You can select and configure the Microsoft Antimalware security extension for virtual machines using the Azure preview portal, Visual Studio or APIs/PowerShell.  Antimalware events are then logged to the customer-configured Azure Storage account via Azure Diagnostics and can be piped to HDInsight or a SIEM tool for further analysis. More information is available in the Microsoft Antimalware Whitepaper.

To enable the antimalware feature on an existing virtual machine, select the EXTENSIONS tile on a Virtual Machine in the Azure Preview Portal, then click ADD in the command bar and select the Microsoft Antimalware extension. Then, click CREATE and customize any settings:

Virtual Machines: General Availability of even more VM Extensions

In addition to enabling the Microsoft Anti-Malware extension for Virtual Machines, today’s release also includes support for a whole bunch more new VM extensions that you can enable within your Virtual Machines.  These extensions can be added and configured using the same EXTENSIONS tile on Virtual Machine resources within the Azure Preview Portal (the same screen-shot as in the Anti-malware section above).

The new extensions enabled today include:

PowerShell Desired State Configuration

The PowerShell Desired State Configuration Extension can be used to deploy and configure Azure VMs using Desired State Configuration (DSC) technology. DSC enables you to declaratively specify how you want your software environment to be configured. DSC configuration can also be automated using the Azure PowerShell SDK, and you can push configurations to any Azure VM and have them enacted automatically. For more details, please see this desired state configuration blog post.


Octopus

Octopus simplifies the deployment of ASP.NET web applications, Windows Services and other applications by automatically configuring IIS, installing services and making configuration changes. Octopus integration with Azure was one of the top requested features on Azure UserVoice, and with this integration we will simplify the deployment and configuration of Octopus on the VM.


Visual Studio Release Management

Release Management for Visual Studio is a continuous delivery solution that automates the release process through all of your environments, from TFS through to production. Visual Studio Release Management is integrated with TFS, and you can configure multi-stage release pipelines to automatically deploy and validate your applications on multiple environments. With the new Visual Studio Release Management extension, VMs can be preconfigured with the components required for Release Management to operate.

Summary

Today’s Microsoft Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don’t already have an Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Microsoft Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

Categories: Architecture, Programming

Software Development Conferences Forecast October 2014

From the Editor of Methods & Tools - Tue, 10/28/2014 - 15:00
Here is a list of software development related conferences and events on Agile (Scrum, Lean, Kanban), software testing and software quality, programming (Java, .NET, JavaScript, Ruby, Python, PHP) and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods & Tools software development magazine.

  • W-JAX 2014, November 3-7 2014, Munich, Germany
  • Business Technology Days 2014, November 3-6, Munich, Germany
  • QCon San Francisco, November 3-7 2014, San Francisco, USA - Exclusive $50 Methods & Tools discount with promo code “softdevconf50”
  • Better Software & Agile Development Conference ...

Commitment-Driven Sprint Planning

Mike Cohn's Blog - Tue, 10/28/2014 - 15:00

There are two primary ways for planning a sprint: velocity-driven sprint planning and commitment-driven sprint planning. In last week’s post, I described velocity-driven planning; so in this week’s, we turn our attention to commitment-driven sprint planning.

A commitment-driven sprint planning meeting involves the product owner, ScrumMaster and all development team members. The product owner brings the top-priority product backlog items into the meeting and describes them to the team, usually starting with an overview of the set of high-priority items.

Select an Item

Following that, team members select a first item to bring into the sprint. This will almost always be the product owner’s top-priority item, but it is possible that the product owner’s top priority has too many open issues.

Ideally, a team should be able to still bring that item into the sprint and resolve the issues early enough in the sprint to complete the item. But, it’s possible that there are so many issues, that the issues are so significant, or that resolving the issues would take so much time (for example, the need to convene a meeting with 25 user representatives) that the product owner’s top priority is skipped.

Tasks and Hours

Having selected a high-priority item, team members discuss the work involved, and identify the tasks that will be necessary to deliver the product backlog item. Either concurrent with identifying the tasks or immediately after they finish doing so, team members roughly estimate the number of hours each task will take.

Do not ask or expect a team to think of every task that will be done during the sprint. Not only is that impossible, it is also unnecessary.

Teams should think of enough of the tasks that they feel they have thought through the work—but it is important to realize that thinking through the work is the real goal of this meeting. Identifying tasks and hours is secondary.

Asking for Commitment

After they’ve identified tasks and roughly estimated the hours for that one product backlog item, the team members ask themselves, “Can we commit to this?”

I find it very important that the team members ask this collectively of themselves rather than having a ScrumMaster ask, “Can you commit to this?” When team members ask, “Can we commit?” they are committing to each other rather than to the ScrumMaster.

I don’t know about you, but my early, pre-Scrum career is littered with broken “commitments” to bosses who asked if I could deliver something while making it clear my answer better be yes.

The ScrumMaster isn’t a boss and shouldn’t create that type of feeling among team members, but the person is called “master” – and it’s better not to risk being perceived as a boss insisting on a commitment.

Coach a team to ask, “Can we commit?” and it’s clear that they are committing to one another, which will likely be a stronger commitment.

Further, by having the team ask themselves, “Can we commit?” it is clear that the answer should be, “Yes we can” or “No we can’t.” When a ScrumMaster asks, “Can you commit?” some team members will properly answer with “we” but others will answer with “I.”

Scrum demands a full-team commitment: If you’re behind, I’ll help, and I know you’ll do the same for me. It’s not “these are my tasks” and “those are yours.”

Repeat with More Stories

If the team agrees they can commit to a product backlog item, they select another item and repeat the process. And so it goes—tasks, hours and commitment—until someone says they cannot commit to the selected product backlog item.

If someone cannot commit, team members will generally discuss the situation and see if someone else is available to help—perhaps a DBA with rudimentary JavaScript skills can help an overwhelmed JavaScript developer.

If not, perhaps that story can be put back on the product backlog but a smaller item can be brought in, or an item that needs less of the skills possessed by the person who could not commit.

No Role for Points or Velocity?

You may have noticed that in the process so far, there has been no role for story points or velocity. Although I still recommend that product backlog items be given quick, high-level estimates in story points, neither story points nor velocity play a role in commitment-driven sprint planning as described so far.

They do, however, play an important role in the final step of a sprint planning meeting.

Sanity Checking the Commitment

Once team members have filled their available time in the sprint, the ScrumMaster can look at the selected product backlog items, sum the story points assigned to each, and share that sum with the team. Team members can then compare it to the average or recent velocity.

Suppose a team with an average velocity of 20 conducts a commitment-driven sprint planning meeting and selects 19 points of work. They’ve done this without knowing the story point values on any of the selected product backlog items.

When their ScrumMaster tells them they’ve just selected 19 points of work and have an average velocity of 20, that team should feel very confident they’ve selected an appropriate amount of work for the sprint.

Suppose instead, though, that the ScrumMaster for this team announces they’ve selected only 11 points of work. They might in that case ask themselves why they were making the work so hard during sprint planning as compared to when they’d earlier estimated the same items in story points.

For example, this may reveal that during sprint planning they’d identified work they’d earlier not thought about, or perhaps had explicitly assumed would not be part of a given story. Or they may discover that the story really is harder than they’d thought when assigning points to it.

Either way, knowing they’d selected 11 yet averaged 20 will help the team know they’ve selected an appropriate amount of work or perhaps make a change to bring more.

Similarly, if the ScrumMaster announces that the team has selected 30 points, 10 more than their average velocity, the team may wonder what they are forgetting to consider. “Why,” they would discuss, “does this work seem so much easier after sprint planning than it did while estimating story points?”

So: story points and velocity do not play a role during the main portion of a commitment-driven sprint planning meeting. But they play the vital role of acting as a sanity check and confirmation of the plan.
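
To make that sanity check concrete, here is a small illustrative sketch of the comparison described above; the 20 percent tolerance is an arbitrary assumption, not something the article prescribes:

public class CommitmentSanityCheck {

    // Compare the points just selected in sprint planning against average velocity.
    public static String check(int selectedPoints, int averageVelocity) {
        double ratio = (double) selectedPoints / averageVelocity;
        if (ratio < 0.8) {
            return "Selected noticeably less than velocity - discuss why the work seems harder now.";
        } else if (ratio > 1.2) {
            return "Selected noticeably more than velocity - discuss what might be forgotten.";
        }
        return "Selection is in line with average velocity - the commitment looks reasonable.";
    }

    public static void main(String[] args) {
        // The example from the article: 19 points selected against an average velocity of 20.
        System.out.println(check(19, 20));
    }
}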

It’s a Commitment, Not a Guarantee

It is important that the team’s commitment not be viewed as a guarantee. As Clint Eastwood said in one of his movies, “If you want a guarantee, buy a toaster.”

The team’s commitment is to do its best. I’d like to see them make their commitment perhaps 80 percent of the time. It should be something they take seriously and should make most of that time. That’s needed for the business to gain confidence in what a team says it can deliver.

However, finishing everything they say they will 100 percent of the time should not be the goal. A team forced to finish everything every time will do so—but by reducing what they commit to.

I originally named this approach commitment-driven sprint planning in my Agile Estimating and Planning book; others have taken to calling this “capacity-based planning.” I’m beginning to like the latter term better because of how easily a team’s commitment can be forced into being a guarantee.