Methods & Tools

Feed aggregator

Story Telling Techniques: The Premortem

Identify the risks before you start with a premortem.

Storytelling generates the big picture to guide a project or to help people frame their thoughts. A story can provide a deeper and more nuanced connection with information than most lists of PowerPoint bullets or a structured requirements document. Storytelling can also be used as a risk management tool. Premortems are a useful technique for helping project teams anticipate risks; Gary Klein described them in the September 2007 issue of the Harvard Business Review. The basic premortem approach can be customized with storytelling to increase the power of the technique.

The basic premortem technique is as follows:

Step 1 – Prepare: Gather the project team.

Step 2 – Have the team assume the project has utterly failed, then ask: what caused the failure?

Step 3 – Give each person three minutes to quietly write down all of the reasons they think the failure occurred.

Step 4 – Using a round-robin approach, have each person share one item from their list at a time while a facilitator records the reasons on a whiteboard or flipchart.  Continue until all items are shared and recorded.

(The first four steps help defeat groupthink.)

Step 5 – Identify the top 3-5 items on the list and create user stories identifying the risks.  These high-priority risks will be added to the backlog and revisited during grooming.  Common issues should be added to the team’s definition of done.

Step 6 – Periodically review the overall list with the team to determine whether any of the risks not added in step 5 have become more urgent. 

A more powerful twist to the standard process replaces steps 2 – 4 with a storytelling technique.

  • Break the team into pairs.
  • Provide the participants with an overview of the storytelling process, storytelling formats and the goal of the session.
  • Provide the participants with the premise that the project has failed and ask them to tell the story of how that point was reached.
  • Use probing questions to help the pairs progress in generating the story. The pairs should be cross-functional. Time box this portion of the session to 15 minutes.
  • Have each team debrief the group with their stories.
  • Have the full team identify the issues that shaped the stories; these are potential risks.  The most critical risks should be added to the backlog (or, if they are common issues, to the definition of done).

When using the premortem storytelling technique there are a few important rules (many of these are useful for all types of storytelling sessions).

  1. Minimize interruptions: close laptops and have people put their phones away (consider collecting people’s phones).
  2. Set aside approximately two hours for generating the stories and discussing the results.
  3. The whole project team and important stakeholders should be present, or you will risk blind spots.
  4. If some members are not present, video conferencing is important to create personal connections.
  5. A facilitator is important to making the process effective.  The facilitator should not be a critical team member or stakeholder.
  6. The facilitator must ensure that the stories from the session are captured and the top 3-5 (more or less at the team’s discretion) are added to the product backlog.

The premortem is an excellent tool to increase the team’s involvement in and understanding of risks.  Adding storytelling to the technique increases the richness of the experience over common brainstorming and listing techniques.  The results of a storytelling premortem will not only identify risks but also provide the context of how team members think the risks will emerge and turn into issues.

 


Categories: Process Management

Principles, Processes, and Practices of Project Success

Herding Cats - Glen Alleman - Tue, 03/28/2017 - 22:01

In the project management world, everyone is selling something to solve some problem. This includes product vendors, consulting firms, and internal providers. I'm on the internal provider side most of the time. Other times, I'm on the consulting firm side, acting as an internal provider. I'm not on the vendor side.

Over the years (30-something years) I've come to understand and write about the Principles, Processes, and Practices of project success in a wide variety of domains: from software systems to heavy construction, metal bending companies, petrochemicals, pulp and paper, drugs, consumer products, industrial products, and intellectual property.

Over this time I've been guided by project planning and control people, engineers, sales people, marketing people, and senior business leaders. 

Here's what I've learned: there are 5 Principles, 5 Practices, and 5 Processes that can be applied to increase the probability of success of all projects.

The notion that we should move our focus to Products and away from Projects ignores (sometimes willfully) what the role of a project is. Those suggesting we don't need projects (#NoProjects) probably don't understand the actual role of project work, but that's another post.

Here's the summary of these Principles, Practices, and Processes. 

[Screenshot: summary table of the 5 Principles, 5 Processes, and 5 Practices]

So when someone suggests some new way of managing a project, use this to check whether their new way covers the Principles, Processes, and Practices.

Related articles: Risk Management is How Adults Manage Projects; Root Cause of Project Failure; Want To Learn How To Estimate?
Categories: Project Management

A New Home for Google Open Source

Google Code Blog - Tue, 03/28/2017 - 19:36
Originally on Google Open Source Blog
Posted by Will Norris, Open Source Programs Office

Free and open source software has been part of our technical and organizational foundation since Google's early beginnings. From servers running the Linux kernel to an internal culture of being able to patch any other team's code, open source is part of everything we do. In return, we've released millions of lines of open source code, run programs like Google Summer of Code and Google Code-in, and sponsor open source projects and communities through organizations like Software Freedom Conservancy, the Apache Software Foundation, and many others.

Today, we're launching opensource.google.com, a new website for Google Open Source that ties together all of our initiatives with information on how we use, release, and support open source.

This new site showcases the breadth and depth of our love for open source. It will contain the expected things: our programs, organizations we support, and a comprehensive list of open source projects we've released. But it also contains something unexpected: a look under the hood at how we "do" open source.
Helping you find interesting open source

One of the tenets of our philosophy towards releasing open source code is that "more is better." We don't know which projects will find an audience, so we help teams release code whenever possible. As a result, we have released thousands of projects under open source licenses, ranging from larger products like TensorFlow, Go, and Kubernetes to smaller projects such as Light My Piano, Neuroglancer, and Periph.io. Some are fully supported while others are experimental or just for fun. With so many projects spread across 100 GitHub organizations and our self-hosted Git service, it can be difficult to see the scope and scale of our open source footprint.

To provide a more complete picture, we are launching a directory of our open source projects which we will expand over time. For many of these projects we are also adding information about how they are used inside Google. In the future, we hope to add more information about project lifecycle and maturity.
How we do open source

Open source is about more than just code; it's also about community and process. Participating in open source projects and communities as a large corporation comes with its own unique set of challenges. In 2014, we helped form the TODO Group, which provides a forum to collaborate and share best practices among companies that are deeply committed to open source. Inspired by many discussions we've had over the years, today we are publishing our internal documentation for how we do open source at Google.

These docs explain the process we follow for releasing new open source projects, submitting patches to others' projects, and how we manage the open source code that we bring into the company and use ourselves. But in addition to the how, it outlines why we do things the way we do, such as why we only use code under certain licenses or why we require contributor license agreements for all patches we receive.

Our policies and procedures are informed by many years of experience and lessons we've learned along the way. We know that our particular approach to open source might not be right for everyone—there's more than one way to do open source—and so these docs should not be read as a "how-to" guide. Similar to how it can be valuable to read another engineer's source code to see how they solved a problem, we hope that others find value in seeing how we approach and think about open source at Google.

To hear a little more about the backstory of the new Google Open Source site, we invite you to listen to the latest episode from our friends at The Changelog. We hope you enjoy exploring the new site!
Categories: Programming

Calling all early adopters for Android Studio previews

Android Developers Blog - Tue, 03/28/2017 - 17:03
Posted by Scott Main, Technical Writer

If you love trying out all of the newest features in Android Studio and helping us make it a better IDE, we're making it even easier to download early preview builds with a new website. Here, you can download and stay up to date on all the latest Android Studio previews and other tools announcements.



Android Studio previews give you early access to new features in all aspects of the IDE, plus early versions of other tools such as the Android Emulator and platform SDK previews. You can install multiple versions of Android Studio side-by-side, so if a bug in the preview build blocks your app development, you can keep working on the same project from the stable version.

The latest preview for Android Studio 2.4 just came out last week, and it includes new features to support development with the Android O Developer Preview. You can download and set up the O preview SDK from inside Android Studio, and then use Android O’s XML font resources and autosizing TextView in the Layout Editor.

By building your apps with the Android Studio preview, you're also helping us create a better version of Android Studio. We want to hear from you if you encounter any bugs.
Categories: Programming

Better User Stories: 24 Hours Until Doors Close

Mike Cohn's Blog - Tue, 03/28/2017 - 17:00

Just a quick post this week to let you know that we will be closing registration to Better User Stories tomorrow at 6 P.M. Pacific, 9 P.M. Eastern.

We still have spaces for the Expert and Professional Levels, but Work With Mike is now completely sold out.

Click here to register before the deadline

Just a quick reminder of what people are saying about the course:

I could squeeze videos in between meeting packed days

“I loved the acronyms used to test story quality and that the modules were broken up into small enough segments that I could squeeze videos in between meeting packed days… I really enjoyed the worksheets that forced me to use my own backlog as practice to cement the concepts in my brain. It's way too easy to go through an online course and not really retain information that is useful later but that's what made it real for me.” - Sarah Fraser

Immediately able to apply what I learned

“I've used user stories for many years. I wasn't sure if this course was really going to teach me something new… I thought if anyone is going to be able to teach me more about user stories it will be Mike Cohn… The Q&A calls with the training were great. I think this is a big differentiator to other online trainings I've done. I was immediately able to apply what I learned in this course to support teams get their backlog set up as they begin delivering using the scrum framework.” - Amber Burke

If you’re on the fence, jump in…

“It has already influenced and changed how I deliver story writing workshops. There is a lot of valuable information. It is split up into logical and digestible segments. For anyone willing to put in the time that needs to understand how to write better stories; you will find value here. If you're on the fence, jump in...you won't regret it.” - Max Lamers

You still have (some) time to access the free mini-course

When we close registration to the full Better User Stories course, we will also be taking down the free video training. If you’ve not yet seen those, you still have time to register and watch them before tomorrow’s deadline.

Click here to access the free mini-course

I don’t know when we’ll be opening doors again to the full, advanced course, so if you and your team want to sharpen your user stories skills, this is a great time to join.

Any last minute questions about the course? Let me know in the comments below.

Agile & Software Testing in Methods & Tools Q1 2017 articles

From the Editor of Methods & Tools - Tue, 03/28/2017 - 15:35
Here is a list of the articles published during the first quarter of 2017 on the Methods & Tools website. This quarter Methods & Tools has published articles discussing the management debt issue in Agile transformation, Agile forecasting and testing microservices. We also published two articles presenting open source software testing tools: CasperJS and Nitrate. […]

Own the Future of Management, with Me

NOOP.NL - Jurgen Appelo - Tue, 03/28/2017 - 12:48
You have just 3 days left to become my business partner.

As of today, I am no longer the only owner of the Management 3.0 business. I have co-owners. Yay!! And you can now join me as a co-owner too.

Management 3.0 is about better management with fewer managers. The ideas and practices help leaders and other change agents with managing for more happiness at work. And the brand was named the leader of the Third Wave of Agile, because of our focus on the entire business, rather than just on development practices and projects. By improving the management of organizations, we hope we are helping to make the world a better place.

As a co-owner of Management 3.0, you support and participate in my adventure. Either passively or actively, you help our team to offer healthy games, tools, and practices to more managers in more organizations.

Since last week, a Foundation owns the Management 3.0 business model and this Foundation has issued virtual shares. The business has grown by more than 40% each year in 2014, 2015, and 2016. In other words, the ownership of shares not only contributes to happier people in healthier organizations. It is also a smart business investment!

Until 31 March 2017, I am selling virtual shares (officially: certificates) for EUR 50 per share. On 1 April 2017, I will stop selling them for a while. I may continue selling more shares later, but probably at a higher price. And there are more reasons not to wait!

When you buy 10 or more shares before 1 April 2017, I send you a free copy of the book Managing for Happiness, personally signed by me on a unique hand-drawn bookplate.

When you buy 100 or more shares before 1 April 2017, you are entitled to a free one-hour keynote on location (excluding travel and accommodation expenses).

When you buy 1,000 or more shares before 1 April 2017, you gain the status of honored business partner, with special privileges and exclusive access to the team and me.

And everyone who buys shares has a chance to win one of my last eight copies of #Workout, the exclusive Limited Edition. Some people sell it for $2000+ on Amazon.

It is important to know that Management 3.0 is a global brand. I prefer that ownership is distributed across the world. I reserve the right not to sell too many shares to people in the same country. (And yes, it’s first come, first served.)

What are the next steps?

1. Check out my FAQ for all details (read it here);
2. Fill out the application form (APPLY HERE);
3. Sign the simple agreement (I will send it);
4. Pay the share price (information will follow).

I asked the notary and my accountant to make it so simple that it’s five minutes of work and you could be a co-owner in one day.

When this simple procedure is complete, we add you to the exclusive list of Management 3.0 owners. You can proudly wear a bragging badge on your website, and the team will inform you about new developments on a regular basis.

Don’t wait too long!

This offer is valid until 31 March 2017. The available shares per country are limited.

OWN THE FUTURE OF MANAGEMENT – APPLY NOW

The post Own the Future of Management, with Me appeared first on NOOP.NL.

Categories: Project Management

Luigi: Defining dynamic requirements (on output files)

Mark Needham - Tue, 03/28/2017 - 06:39

In my last blog post I showed how to convert a JSON document containing meetup groups into a CSV file using Luigi, the Python library for building data pipelines. As well as creating that CSV file I wanted to go back to the meetup.com API and download all the members of those groups.

This was a rough flow of what I wanted to do:

  • Take JSON document containing all groups
  • Parse that document and for each group:
    • Call the /members endpoint
    • Save each one of those files as a JSON file
  • Iterate over all those JSON files and create a members CSV file

In the previous post we created the GroupsToJSON task which calls the /groups endpoint on the meetup API and creates the file /tmp/groups.json.

Our new task has that as its initial requirement:

class MembersToCSV(luigi.Task):
    key = luigi.Parameter()
    lat = luigi.Parameter()
    lon = luigi.Parameter()

    def requires(self):
        yield GroupsToJSON(self.key, self.lat, self.lon)

But we also want to create a requirement on a task that will make those calls to the /members endpoint and store the result in a JSON file.

One of the patterns that Luigi imposes on us is that each task should only create one file, so we actually have a requirement on a collection of tasks rather than just one.  It took me a little while to get my head around that!

We don’t know the parameters of those tasks at compile time – we can only calculate them by parsing the JSON file produced by GroupsToJSON.

In Luigi terminology what we want to create is a dynamic requirement. A dynamic requirement is defined inside the run method of a task and can rely on the output of any tasks specified in the requires method, which is exactly what we need.
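As a minimal sketch of the shape of a dynamic requirement (ListItems and ProcessItem here are hypothetical tasks, not part of this pipeline):

class ProcessEverything(luigi.Task):
    def requires(self):
        return ListItems()  # static requirement, known before run() starts

    def run(self):
        with self.input().open("r") as f:  # the output of ListItems
            items = json.load(f)
        # Yielding tasks from run() declares dynamic requirements:
        # Luigi suspends this task until every yielded task is complete.
        yield [ProcessItem(item_id) for item_id in items]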

This code does the delegating part of the job:

class MembersToCSV(luigi.Task):
    key = luigi.Parameter()
    lat = luigi.Parameter()
    lon = luigi.Parameter()


    def run(self):
        outputs = []
        for input in self.input():
            with input.open('r') as group_file:
                groups_json = json.load(group_file)
                groups = [str(group['id']) for group in groups_json]


                for group_id in groups:
                    members = MembersToJSON(group_id, self.key)
                    outputs.append(members.output().path)
                    yield members


    def requires(self):
        yield GroupsToJSON(self.key, self.lat, self.lon)

Inside our run method we iterate over the output of GroupsToJSON (which is our input), yield a MembersToJSON task for each group, and collect each task’s output path in the outputs list that we’ll use later.

MembersToJSON looks like this:

class MembersToJSON(luigi.Task):
    group_id = luigi.IntParameter()
    key = luigi.Parameter()


    def run(self):
        results = []
        uri = "https://api.meetup.com/2/members?&group_id={0}&key={1}".format(self.group_id, self.key)
        while True:
            if uri is None:
                break
            r = requests.get(uri)
            response = r.json()
            for result in response["results"]:
                results.append(result)
            uri = response["meta"]["next"] if response["meta"]["next"] else None


        with self.output().open("w") as output:
            json.dump(results, output)

    def output(self):
        return luigi.LocalTarget("/tmp/members/{0}.json".format(self.group_id))

This task generates one file per group containing a list of all the members of that group.

We can now go back to MembersToCSV and convert those JSON files into a single CSV file:

class MembersToCSV(luigi.Task):
    out_path = "/tmp/members.csv"
    key = luigi.Parameter()
    lat = luigi.Parameter()
    lon = luigi.Parameter()


    def run(self):
        outputs = []
        for input in self.input():
            with input.open('r') as group_file:
                groups_json = json.load(group_file)
                groups = [str(group['id']) for group in groups_json]


                for group_id in groups:
                    members = MembersToJSON(group_id, self.key)
                    outputs.append(members.output().path)
                    yield members

        with self.output().open("w") as output:
            writer = csv.writer(output, delimiter=",")
            writer.writerow(["id", "name", "joined", "topics", "groupId"])

            for path in outputs:
                group_id = path.split("/")[-1].replace(".json", "")
                with open(path) as json_data:
                    d = json.load(json_data)
                    for member in d:
                        topic_ids = ";".join([str(topic["id"]) for topic in member["topics"]])
                        if "name" in member:
                            writer.writerow([member["id"], member["name"], member["joined"], topic_ids, group_id])

    def output(self):
        return luigi.LocalTarget(self.out_path)

    def requires(self):
        yield GroupsToJSON(self.key, self.lat, self.lon)

We then just need to add our new task as a requirement of the wrapper task:
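Based on the Meetup wrapper task from the previous post, the updated wrapper presumably looks something like this, with the second yield being the only change:

class Meetup(luigi.WrapperTask):
    def run(self):
        print("Running Meetup")

    def requires(self):
        key = os.environ['MEETUP_API_KEY']
        lat = os.getenv('LAT', "51.5072")
        lon = os.getenv('LON', "0.1275")

        yield GroupsToCSV(key, lat, lon)
        yield MembersToCSV(key, lat, lon)  # the new requirement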

And we’re ready to roll:

$ PYTHONPATH="." luigi --module blog --local-scheduler Meetup --workers 3

We’ve defined the number of workers here as we can execute those calls to the /members endpoint in parallel and there are ~ 600 calls to make.

All the code from both blog posts is available as a gist if you want to play around with it.

Any questions/advice? Let me know in the comments or I’m @markhneedham on Twitter.

The post Luigi: Defining dynamic requirements (on output files) appeared first on Mark Needham.

Categories: Programming

Join us live on May 23rd as we announce the latest Ads, Analytics and DoubleClick innovations

Google Code Blog - Mon, 03/27/2017 - 19:03
Posted by Sridhar Ramaswamy Senior Vice President, Ads and Commerce

What: Google Marketing Next keynote live stream
When: Tuesday, May 23rd at 9:00 a.m. PT/12:00 p.m. ET.
Duration: 1 hour
Where: On the Inside AdWords Blog



Be the first to hear about Google’s latest marketing innovations, the moment they’re announced. Watch live as my team and I share new Ads, Analytics and DoubleClick innovations designed to improve your ability to reach consumers, simplify campaign measurement and increase your productivity. We’ll also give you a sneak peek at how brands are starting to use the Google Assistant to delight customers.

Register for the live stream here.

Until then, follow us on Twitter, Google+, Facebook and LinkedIn for previews of what's to come.
Categories: Programming

AgilePath Podcast Up

I’ve said before that agile is a cultural change, not merely a project management framework or approach. One of the big changes is around transparency and safety.

We need safety to experiment. We need safety to be transparent. Creating that safe environment can be difficult for everyone involved.

John LeDrew has started a new podcast, agilepath.fm. I had the pleasure of chatting with John for the podcast. He wove a story with several other interviewees and it’s now up: In Search of Safety.

I hope you enjoy it.

Categories: Project Management

Faster Networks + Cheaper Messages => Microservices => Functions => Edge

When Adrian Cockroft—the guy who helped put the loud in Cloud through his energetic evangelism of Cloud Native and Microservice architectures—talks about what’s next, it pays to listen. And you can listen; here’s a fascinating forward-looking talk he gave at microXchg 2017: Shrinking Microservices to Functions. It’s typically Cockroftian: understated, thoughtful, and full of insight drawn from experience.

Adrian makes a compelling case that the same technology drivers, faster networking and cheaper messaging, that drove the move to Microservices are now driving the move to Functions.

The payoffs are all those you’ve no doubt heard about Serverless for some time, but Adrian develops them in an interesting way. He traces how architectures have evolved over time. Take a look at my gloss of his talk for more details.

What’s next after Functions? Adrian talks about pushing Lambda functions to the edge, a topic I’m excited about and have been interested in for some time, though I didn’t quite see it playing out like this.

Datacenters disappear. Functions are not running in an AWS region anymore; code is placed near the customer using a CDN at CDN endpoints. Now you have a fully distributed, at-the-edge, low-latency, milliseconds-from-the-customer way of running code. Now you can build architectures that are partly in the datacenter, partly at the edge, and partly on the customer premises. And since this is AWS, it’s all, of course, built around Lambda. AWS Greengrass and Snowball Edge are peeks into what the future might look like.

There’s a hidden tension here. Once you put code at the edge you violate two of Lambda’s key assumptions: that functions are composed using scalable backend services, and that messaging is low latency. The edge will have a high latency path back to services in the datacenter, so how do you make a function-based distributed application at the edge? Does edge computing argue for a more retro architecture with fewer messages back to a more monolithic core?

Or does edge computing require something completely different? Here’s one thought as to what that something completely different might look like: Datanet: A New CRDT Database That Let's You Do Bad Bad Things To Distributed Data.

Now, let’s see the future by first taking a tour of the past….

From Monoliths, to Microservices, to Functions
Categories: Architecture

5 tips for building communities on mobile

Android Developers Blog - Mon, 03/27/2017 - 17:01
Posted by Dave Geffon, Partnerships Manager, Google Play Games

The most successful games usually have the strongest communities. They are a powerful force in driving additional engagement and increasing awareness for your titles. At GDC 2017, we spoke with a few game developers about best practices for successfully building their own communities. Watch the panel session below to hear advice from Seriously, Social Point, and Super Evil MegaCorp.


1. Be authentic

Community is a mindset; be honest, transparent and patient with your communications. Loyal users are extremely valuable, thus the folks at Super Evil MegaCorp say that you should act like you have to earn every player.

2. Start small
Build a plan and start today. Launch your social media channels, look into influencers, and create a strategy. Whether it's sharing one piece of fan art a week across your network, or running a closed beta to gather feedback from your most valued users, take action and learn what works best for you and your users.
3. Play match-maker
When finding influencers to support your game, ensure they're a genuine match. Make sure the influencer's audience is a good fit with your game and existing community.
4. Seek feedback 
Communities are passionate. Use feedback to understand what kind of game and features your users want. Be flexible and iterative so you can react and evolve your game with the needs and desires of your community. However, don't be afraid to stay true to what you stand for as sometimes you'll need to agree to disagree with some players.
5. Build for the long-term
The lifespan of games is continuing to grow. Plan your business strategy, update cycles and community efforts to roll out over time and expand with your growing experiences and user-base.
Watch more sessions from Google Developer Day at GDC17 on the Android Developers YT channel to learn tips for success. Also, visit the Android Developers website to stay up-to-date with features and best practices that will help you grow a successful business on Google Play.


Categories: Programming

5 tips for launching successful apps and games on Google Play

Android Developers Blog - Mon, 03/27/2017 - 16:53
Posted by Adam Gutterman, Go-To-Market Strategic Lead, Google Play Games

Last month at the Game Developers Conference (GDC), we held a developer panel focused on sharing best practices for building successful app and game businesses. Check out 5 tips for developers, both large and small, as shared by our gaming partners at Electronic Arts (EA), Hutch Games, Nix Hydra, Space Ape Games and Omnidrone.



1. Test, test, test
The best time to test is before you launch, so test boldly and test a lot! Nix Hydra recommends testing creative, including art style and messaging, as well as gameplay mechanics, onboarding flows and anything else you're not sure about. Gathering feedback from real users in advance of launching can highlight what's working and what can be improved to ensure your game's in the best shape possible at launch.
2. Store listing experiments
Run experiments on all of your store listing page assets. Taking bold risks instead of making assumptions allows you to see the impact of different variables with your actual user base on Google Play. Test in different regions to ensure your store listing page is optimized for each major market, as they often perform differently.

3. Early Access program

Space Ape Games recently used Early Access to test different onboarding experiences and gameplay control methods in their game. Finding the right combination led them to double-digit growth in D1 retention. Gathering these results in advance of launch helped the team fine tune and polish the game, minimizing risk before releasing to the masses.

"Early Access is cool because you can ask the big questions and get real answers from real players," Joe Raeburn, Founding Product Guy at Space Ape Games.
Watch the Android Developer Story below to hear how Omnidrone benefits from Early Access using strong user feedback to improve retention, engagement and monetization in their game.


Mobile game developer Omnidrone benefits from Early Access.
4. Pre-registration

Electronic Arts has run more than 5 pre-registration campaigns on Google Play. Pre-registration allows them to start marketing and build awareness for titles with a clear call-to-action before launch. This gives them a running start on launch day: having built a group of users to activate upon the game's release results in a jump in D1 installs.

5. Seek feedback

All partners strongly recommended seeking feedback early and often. Feedback tells both sides of the story, by pointing out what's broken as well as what you're doing right. Find the right time and channels to request feedback, whether they be in-game, social, email, or even through reading and responding to reviews within the Google Play store.

If you're a startup who has an upcoming launch on Google Play or has launched an app or game recently and you're interested in opportunities like Early Access and pre-registration, get in touch with us so we can work with you.

Watch sessions from Google Developer Day at GDC17 on the Android Developers YT channel to learn tips for success. Also, visit the Android Developers website to stay up-to-date with features and best practices that will help you grow a successful business on Google Play.



Categories: Programming

SPaMCAST 435 – Allan Kelly, #NoProjects, Value


http://www.spamcast.net

Listen Now
Subscribe on iTunes
Check out the podcast on Google Play Music

The Software Process and Measurement Cast 435 features our interview with Allan Kelly.  Our discussion touched on the concepts behind #NoProjects.  Allan describes how the concept of a project leads to a number of unintended consequences.  Those consequences aren’t pretty.

Allan makes digital development teams more effective and improves delivery with continuous agile approaches to reduce delay and risk while increasing value delivered. He helps teams and smaller companies – including start-ups and scale-ups – with advice, coaching and training. Managers, product, and technical staff are all involved in his improvements. He is the originator of Retrospective Dialogue Sheets and Value Poker, the author of four books, including “Xanpan – team-centric Agile Software Development” and “Business Patterns for Software Developers”. On Twitter he is @allankellynet.

Re-Read Saturday News

This week we tackle Chapter 8 of Carol Dweck’s Mindset: The New Psychology of Success (buy your copy and read along), titled “Changing Mindsets.”  The whole concept of mindsets would be an interesting footnote if we did not believe they could change. Chapter 8 drives home the point that has been made multiple times in the book: that mindsets are malleable with self-awareness and a lot of effort. The question of whether all people want to be that self-aware will be addressed next week as we wrap up our re-read.

We are quickly closing in on the end of our re-read of Mindset.  I anticipate one more week.   The next book in the series will be Holacracy (Buy a copy today). After my recent interview with Jeff Dalton on Software Process and Measurement Cast 433, I realized that I had only read extracts from Holacracy by Brian J. Robertson, therefore we will read (first time for me) the whole book together.

Every week we discuss a chapter and then consider the implications of what we have “read” from the point of view of both pursuing an organizational transformation and using the material when coaching teams.

Remember to buy a copy of Carol Dweck’s Mindset and start the re-read from the beginning!

Visit the Software Process and Measurement Cast blog to participate in this and previous re-reads.

Next SPaMCAST

The next Software Process and Measurement Cast will feature our essay on incremental change approaches.  We will also have columns from Jeremy Berriault. Jeremy blogs at https://jberria.wordpress.com/  and Jon M Quigley who brings his column, the Alpha and Omega of Product Development, to the Cast. One of the places you can find Jon is at Value Transformation LLC.

 


Categories: Process Management


Mindset: The New Psychology of Success: Re-Read Week 9, Chapter 8, Changing Mindsets: A Workshop


Next week we will complete our re-read of Mindset with a round-up and some thoughts on using the concepts in this book in a wholesale manner.  The next book in the series will be Holacracy.  Buy a copy today and read along!  I have had a couple of questions about why I did not do a poll for this re-read.  As I noted last week, after my recent interview with Jeff Dalton on Software Process and Measurement Cast 433, I realized that I had only read extracts from Holacracy by Brian J. Robertson.  I think many of us are looking for an organizational paradigm for Agile organizations.  Hierarchies and matrix organizations have clear and immediate drawbacks.  Holacracy might be one tool to address this problem, which is why we will read this book.

One more thing — If you are going to be at QAI Quest 2017 April 3 – 7, please come hear me speak and track me down for a coffee or adult beverage and we can talk shop! 

Chapter 8: Changing Mindsets

The whole concept of mindsets would be an interesting footnote if we did not believe they could change. Chapter 8 drives home the point that has been made multiple times in the book, that mindsets are malleable with self-awareness and a lot of effort. The question of whether all people want to be that self-aware will be addressed next week as we wrap up our re-read.

Dr. Dweck opens the chapter by using the metaphor of surgery to illustrate why change is difficult. For example, if you have a wart, a doctor will freeze it or cut it off.  It is gone.  Old behaviors don’t lend themselves to surgical removal. They are always still lurking in the background and can come back; they are never excised.  If we wanted a medical metaphor, behaviors are more like the virus that causes shingles, which enters the body as chicken pox, runs its course, and then lingers forever after to potentially reemerge over and over (PS – get the vaccination).  When I was young, I smoked.  I don’t know how many times I quit only to relapse.  Every time I relapsed I knew I shouldn’t buy that pack or bum a smoke but did it anyway.  If I had branded myself as weak, I would never have climbed back on the wagon and learned from the triggering event. Dweck points out that our mind is always keeping track and interpreting, keeping a running account of our actions based on our mindset. That accounting process can be the difference between seeing a missed goal, such as smoking cessation, as a learning event and branding yourself a failure.  Our mindsets generate internal dialogs that can empower or “unpower” (I made up this word) us.  A growth mindset generates a different internal dialog than a fixed mindset; a growth mindset looks for the learning opportunity.

Mindsets are not fixed.  In studies presented in Chapter 8, just learning that you have or lean toward a fixed mindset can cause change.  The act of learning provides knowledge that can be helpful to confront the self-destructive behaviors at the heart of a fixed mindset.  Knowing is not always a sufficient mechanism for change.

Chapter 8 provides insights into several academic and commercial approaches Dweck has used to effect change.  The common thread in all of the effective approaches outlined in the chapter is a belief that you are in charge of your mind and your mind can grow (metaphorically).  Change, however, is difficult.  In order to change, an individual has to be able to give up their current self-image and replace it.  Replacing your self-image is frightening.  You have to give up something known and replace it with something else that might sound better but that you have no experience with.

The process of changing from a fixed to a growth mindset begins with making a “vivid, concrete, growth-oriented plan” that includes specific “when, where, and how” components.  Execution needs to be coupled with feedback, support, and mentoring.  None of the good techniques and examples provided will work without willpower and the ability to learn from feedback.

Using Mindsets:

Organizational Transformation:  Mindsets provide a tool for considering how organizational transformation will be perceived and for predicting the unintended consequences of change. For example, if an organization were trying to shift from a risk-averse culture to a more innovative culture, messaging would tend to focus on growth opportunities, failing fast, and learning.  To people within the organization with growth mindsets, these concepts would make sense and be easily absorbed (assuming the organization’s actions supported the words for the most part).  However, those with a fixed mindset (potentially some key players and top individual performers) would first need to recognize that their behavior has to change.  The organization would need to actively provide growth plans to support their transition.  Using mindsets in organizational transformation plans is useful for change management, messaging, and risk planning. At an organizational level, using mindsets in planning is an important thought exercise that can guide other activities, including team-level coaching.

Team Coaching: The phrase “a vivid, concrete, growth-oriented plan” reflects one of the more important tactical realities that must be remembered when using mindsets at a team or personal level.  Teams and organizations don’t change; it is all about the people.  Individual people change, which then influences the team or organization.  Coaches should begin any team coaching activities by targeting leaders/influencers.  Think of the game Jenga: game pieces are removed until the key piece is exposed and, when it is removed, the tower falls.  Transforming a team is much akin to anti-Jenga; the goal is to find the critical piece and help them to change. Change requires self-realization, a plan, effort, and support.

Previous Entries of the re-read of Mindset:

Basics and Introduction

Chapter 1: Mindsets

Chapter 2: Inside the Mindsets

Chapter 3: The Truth About Ability and Accomplishment

Chapter 4: Sports: The Mindset of a Champion

Chapter 5: Business: Mindset and Leadership

Chapter 6: Relationships: Mindsets in Love (or Not)

Chapter 7: Parents, Teachers, Coaches: Where Do Mindsets Come From?

 


Categories: Process Management

Luigi: An ExternalProgramTask example – Converting JSON to CSV

Mark Needham - Sat, 03/25/2017 - 15:09

I’ve been playing around with Luigi, the Python library used to build pipelines of batch jobs, and I struggled to find an example of an ExternalProgramTask, so this is my attempt at filling that void.

Luigi - the Python data library for building data science pipelines

I’m building a little data pipeline to get data from the meetup.com API and put it into CSV files that can be loaded into Neo4j using the LOAD CSV command.

The first task I created calls the /groups endpoint and saves the result into a JSON file:

import luigi
import requests
import json
from collections import Counter

class GroupsToJSON(luigi.Task):
    key = luigi.Parameter()
    lat = luigi.Parameter()
    lon = luigi.Parameter()

    def run(self):
        seed_topic = "nosql"
        uri = "https://api.meetup.com/2/groups?&topic={0}&lat={1}&lon={2}&key={3}".format(seed_topic, self.lat, self.lon, self.key)

        r = requests.get(uri)
        all_topics = [topic["urlkey"]  for result in r.json()["results"] for topic in result["topics"]]
        c = Counter(all_topics)

        topics = [entry[0] for entry in c.most_common(10)]

        groups = {}
        for topic in topics:
            uri = "https://api.meetup.com/2/groups?&topic={0}&lat={1}&lon={2}&key={3}".format(topic, self.lat, self.lon, self.key)
            r = requests.get(uri)
            for group in r.json()["results"]:
                groups[group["id"]] = group

        with self.output().open('w') as groups_file:
            json.dump(list(groups.values()), groups_file, indent=4, sort_keys=True)

    def output(self):
        return luigi.LocalTarget("/tmp/groups.json")

We define a few parameters at the top of the class which will be passed in when this task is executed. The most interesting lines of the run function are the last couple where we write the JSON to a file. self.output() refers to the target defined in the output function which in this case is /tmp/groups.json.

Now we need to create a task to convert that JSON file into CSV format. The jq command line tool does this job well so we’ll use that. The following task does the job:

from luigi.contrib.external_program import ExternalProgramTask

class GroupsToCSV(ExternalProgramTask):
    file_path = "/tmp/groups.csv"
    key = luigi.Parameter()
    lat = luigi.Parameter()
    lon = luigi.Parameter()

    def program_args(self):
        return ["./groups.sh", self.input()[0].path, self.output().path]

    def output(self):
        return luigi.LocalTarget(self.file_path)

    def requires(self):
        yield GroupsToJSON(self.key, self.lat, self.lon)

groups.sh

#!/bin/bash

in=${1}
out=${2}

echo "id,name,urlname,link,rating,created,description,organiserName,organiserMemberId" > ${out}
jq -r '.[] | [.id, .name, .urlname, .link, .rating, .created, .description, .organizer.name, .organizer.member_id] | @csv' ${in} >> ${out}

I wanted to call jq directly from the Python code but I couldn’t figure out how to do it, so putting that code in a shell script is my workaround.
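For what it’s worth, one way to avoid the shell script would be to shell out to jq from a plain luigi.Task using the subprocess module. This is only a sketch of that alternative (the class name is made up, and it assumes jq is on the PATH), not what the post uses:

import subprocess

class GroupsToCSVViaSubprocess(luigi.Task):
    file_path = "/tmp/groups.csv"
    key = luigi.Parameter()
    lat = luigi.Parameter()
    lon = luigi.Parameter()

    def run(self):
        jq_filter = ('.[] | [.id, .name, .urlname, .link, .rating, .created, '
                     '.description, .organizer.name, .organizer.member_id] | @csv')
        # jq prints the CSV rows to stdout; capture them and write the target file
        rows = subprocess.check_output(
            ["jq", "-r", jq_filter, self.input()[0].path]).decode("utf-8")
        with self.output().open("w") as out:
            out.write("id,name,urlname,link,rating,created,"
                      "description,organiserName,organiserMemberId\n")
            out.write(rows)

    def output(self):
        return luigi.LocalTarget(self.file_path)

    def requires(self):
        yield GroupsToJSON(self.key, self.lat, self.lon)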

The last piece of the puzzle is a wrapper task that launches the others:

import os

class Meetup(luigi.WrapperTask):
    def run(self):
        print("Running Meetup")

    def requires(self):
        key = os.environ['MEETUP_API_KEY']
        lat = os.getenv('LAT', "51.5072")
        lon = os.getenv('LON', "0.1275")

        yield GroupsToCSV(key, lat, lon)

Now we’re ready to run the tasks:

$ PYTHONPATH="." luigi --module blog --local-scheduler Meetup
DEBUG: Checking if Meetup() is complete
DEBUG: Checking if GroupsToCSV(key=xxx, lat=51.5072, lon=0.1275) is complete
INFO: Informed scheduler that task   Meetup__99914b932b   has status   PENDING
DEBUG: Checking if GroupsToJSON(key=xxx, lat=51.5072, lon=0.1275) is complete
INFO: Informed scheduler that task   GroupsToCSV_xxx_51_5072_0_1275_e07372cebf   has status   PENDING
INFO: Informed scheduler that task   GroupsToJSON_xxx_51_5072_0_1275_e07372cebf   has status   PENDING
INFO: Done scheduling tasks
INFO: Running Worker with 1 processes
DEBUG: Asking scheduler for work...
DEBUG: Pending tasks: 3
INFO: [pid 4452] Worker Worker(salt=970508581, workers=1, host=Marks-MBP-4, username=markneedham, pid=4452) running   GroupsToJSON(key=xxx, lat=51.5072, lon=0.1275)
INFO: [pid 4452] Worker Worker(salt=970508581, workers=1, host=Marks-MBP-4, username=markneedham, pid=4452) done      GroupsToJSON(key=xxx, lat=51.5072, lon=0.1275)
DEBUG: 1 running tasks, waiting for next task to finish
INFO: Informed scheduler that task   GroupsToJSON_xxx_51_5072_0_1275_e07372cebf   has status   DONE
DEBUG: Asking scheduler for work...
DEBUG: Pending tasks: 2
INFO: [pid 4452] Worker Worker(salt=970508581, workers=1, host=Marks-MBP-4, username=markneedham, pid=4452) running   GroupsToCSV(key=xxx, lat=51.5072, lon=0.1275)
INFO: Running command: ./groups.sh /tmp/groups.json /tmp/groups.csv
INFO: [pid 4452] Worker Worker(salt=970508581, workers=1, host=Marks-MBP-4, username=markneedham, pid=4452) done      GroupsToCSV(key=xxx, lat=51.5072, lon=0.1275)
DEBUG: 1 running tasks, waiting for next task to finish
INFO: Informed scheduler that task   GroupsToCSV_xxx_51_5072_0_1275_e07372cebf   has status   DONE
DEBUG: Asking scheduler for work...
DEBUG: Pending tasks: 1
INFO: [pid 4452] Worker Worker(salt=970508581, workers=1, host=Marks-MBP-4, username=markneedham, pid=4452) running   Meetup()
Running Meetup
INFO: [pid 4452] Worker Worker(salt=970508581, workers=1, host=Marks-MBP-4, username=markneedham, pid=4452) done      Meetup()
DEBUG: 1 running tasks, waiting for next task to finish
INFO: Informed scheduler that task   Meetup__99914b932b   has status   DONE
DEBUG: Asking scheduler for work...
DEBUG: Done
DEBUG: There are no more tasks to run at this time
INFO: Worker Worker(salt=970508581, workers=1, host=Marks-MBP-4, username=markneedham, pid=4452) was stopped. Shutting down Keep-Alive thread
INFO: 
===== Luigi Execution Summary =====

Scheduled 3 tasks of which:
* 3 ran successfully:
    - 1 GroupsToCSV(key=xxx, lat=51.5072, lon=0.1275)
    - 1 GroupsToJSON(key=xxx, lat=51.5072, lon=0.1275)
    - 1 Meetup()

This progress looks 🙂 because there were no failed tasks or missing external dependencies

===== Luigi Execution Summary =====

Looks good! Let’s quickly look at our CSV file:

$ head -n10 /tmp/groups.csv 
id,name,urlname,link,rating,created,description,organiserName,organiserMemberId
1114381,"London NoSQL, MySQL, Open Source Community","london-nosql-mysql","https://www.meetup.com/london-nosql-mysql/",4.28,1208505614000,"

Meet others in London interested in NoSQL, MySQL, and Open Source Databases.

","Sinead Lawless",185675230 1561841,"Enterprise Search London Meetup","es-london","https://www.meetup.com/es-london/",4.66,1259157419000,"

Enterprise Search London is a meetup for anyone interested in building search and discovery experiences — from intranet search and site search, to advanced discovery applications and beyond.

Disclaimer: This meetup is NOT about SEO or search engine marketing.

What people are saying:

  • ""Join this meetup if you have a passion for enterprise search and user experience that you would like to share with other able-minded practitioners."" — Vegard Sandvold
  • ""Full marks for vision and execution. Looking forward to the next Meetup."" — Martin White
  • “Consistently excellent” — Helen Lippell

Sweet! And what if we run it again?

$ PYTHONPATH="." luigi --module blog --local-scheduler Meetup
DEBUG: Checking if Meetup() is complete
INFO: Informed scheduler that task   Meetup__99914b932b   has status   DONE
INFO: Done scheduling tasks
INFO: Running Worker with 1 processes
DEBUG: Asking scheduler for work...
DEBUG: Done
DEBUG: There are no more tasks to run at this time
INFO: Worker Worker(salt=172768377, workers=1, host=Marks-MBP-4, username=markneedham, pid=4531) was stopped. Shutting down Keep-Alive thread
INFO: 
===== Luigi Execution Summary =====

Scheduled 1 tasks of which:
* 1 present dependencies were encountered:
    - 1 Meetup()

Did not run any tasks
This progress looks 🙂 because there were no failed tasks or missing external dependencies

===== Luigi Execution Summary =====

As expected nothing happens, since our dependencies are already satisfied: Luigi checks whether each task’s output target already exists (here /tmp/groups.json and /tmp/groups.csv) and skips any task whose target is present. With that, we have our first Luigi pipeline up and running.

The post Luigi: An ExternalProgramTask example – Converting JSON to CSV appeared first on Mark Needham.

Categories: Programming

Stuff The Internet Says On Scalability For March 24th, 2017

Hey, it's HighScalability time:

 This is real and oh so eerie. Custom microscope takes a 33 hour time lapse of a tadpole egg dividing.
If you like this sort of Stuff then please support me on Patreon.
  • 40Gbit/s: indoor optical wireless networks; 15%: energy produced by wind in Europe; 5: new tasty particles; 2000: Qubits are easy; 30 minutes: flight time for electric helicopter; 42.9%: of heathen StackOverflowers prefer tabs;

  • Quotable Quotes:
    • @RichRogersIoT: "Did you know? The collective noun for a group of programmers is a merge-conflict." - @omervk
    • @tjholowaychuk: reviewed my dad's company AWS expenses, devs love over-provisioning, by like 90% too, guess that's where "serverless" cost savings come in
    • @karpathy: Nature is evolving ~7 billion ~10 PetaFLOP NI agents in parallel, and has been for ~10M+s of years, in a very realistic simulator. Not fair.
    • @rbranson: This is funny, but legit. Production software tends to be ugly because production is ugly. The ugliness outpaces our ability to abstract it.
    • @joeweinman: @harrietgreen1 : Watson IoT center opened in Munich... $200 million dollar investment; 1000 engineers #ibminterconnect
    • David Gerard: This [IBM Blockchain Service] is bollocks all the way down.
    • digi_owl: Sometimes it seems that the diff between a CPU and a cluster is the suffix put on the latency times.
    • Scott Aaronson: I’m at an It from Qubit meeting at Stanford, where everyone is talking about how to map quantum theories of gravity to quantum circuits acting on finite sets of qubits, and the questions in quantum circuit complexity that are thereby raised.
    • Founder Collective: Firebase didn’t try to do everything at once. Instead, they focused on a few core problems and executed brilliantly. “We built a nice syntax with sugar on top,” says Tamplin. “We made real-time possible and delightful.” It is a reminder that entrepreneurs can rapidly add value to the ecosystem if they really focus.
    • Elizabeth Kolbert: Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups. 
    • Western Union: the ‘telephone’ has too many shortcomings to be seriously considered as a means of communication.
    • Arthur Doskow: being fair, being humane may cost money. And this is the real issue with many algorithms. In economists’ terms, the inhumanity associated with an algorithm could be referred to as an externality. 
    • Francis: The point is that even if GPUs will support lower precision data types exclusively for AI, ML and DNN, they will still carry the big overhead of the graphics pipeline, hence lower efficiency than an FPGA (in terms of FLOPS/WATT). The winner? Dedicated AI processors, e.g. Google TPU
    • James Glasnapp: When we move out of the physical space to a technological one, how is the concept of a “line” assessed by the customer who can’t actually see the line? 
    • Frank: On the other hand, if institutionalized slavery still existed, factories would be looking at around $7,500 in annual costs for housing, food and healthcare per “worker”.
    • Baron Schwartz: If anyone thought that NoSQL was just a flare-up and it’s died down now, they were wrong...In my opinion, three important areas where markets aren’t being satisfied by relational technologies are relational and SQL backwardness, time series, and streaming data. 
    • CJefferson: The problem is, people tell me that if I just learn Haskell, Idris, Closure, Coffescript, Rust, C++17, C#, F#, Swift, D, Lua, Scala, Ruby, Python, Lisp, Scheme, Julia, Emacs Lisp, Vimscript, Smalltalk, Tcl, Verilog, Perl, Go... then I'll finally find 'programming nirvana'.
    • @spectatorindex: Scientists had to delete Urban Dictionary's data from the memory of IBM's Watson, because it was learning to swear in its answers.
    • Animats: [Homomorphically Encrypted Deep Learning] is a way for someone to run a trained network on their own machine without being able to extract the parameters of the network. That's DRM.
    • Dino Dai Zovi: Attackers will take the least cost path through an attack graph from their start node to their goal node.
    • @hshaban: JUST IN: Senate votes to repeal web privacy rules, allowing broadband providers to sell customer data w/o consent including browsing history
    • KBZX5000: The biggest problem you face, as a student, when taking a programming course at a university level, is that the commercially applicable part of it is very limited in scope. You tend to become decent at writing algorithms. A somewhat dubious skill, unless you are extremely gifted in mathematics and/or somehow have access to current or unique hardware IPs (IP as in Intellectual Property).
    • Brian Bailey: The increase in complexity of the power delivery network (PDN) is starting to outpace increases in functional complexity, adding to the already escalating costs of modern chips. With no signs of slowdown, designers have to ensure that overdesign and margining do not eat up all of the profit margin.
    • rbanffy: Those old enough will remember the AS/400 (now called iSeries) computers map all storage to a single address space. You had no disk - you had just an address space that encompassed everything and an OS that dealt with that.
    • @disruptivedean: Biggest source of latency in mobile networks isn't milliseconds in core, it's months or years to get new cell sites / coverage installed
    • Greg Ferro: Why Is 40G Ethernet Obsolete? Short Answer: COST. The primary issue is that 40G Ethernet uses 4x10G signalling lanes. On UTP, 40G uses 4 pairs at 10G each. 
    • @adriaanm: "We chose Scala as the language because we wanted the latest features of Spark, as well as [...] types, closures, immutability [...]"
    • ajamesm: There's a difference between (A) locking (waiting, really) on access to a critical section (where you spinlock, yield your thread, etc.) and (B) locking the processor to safely execute a synchronization primitive (mutexes/semaphores).
    • @evan2645: "Chaos doesn't cause problems, it reveals them" - @nora_js #SREcon17Americas #SRECon17
    • chrissnell: We've been running large ES clusters here at Revinate for about four years now. I've found the sweet spot to be about 14-16 data nodes, plus three master-only nodes. Right now, we're running them under OpenStack on top of our own bare metal with SAS disks. It works well but I have been working on a plan to migrate them to live under Kubernetes like the rest of our infrastructure. I think the answer is to put them in StatefulSets with local hostPath volumes on SSD.
    • @beaucronin: Major recurring theme of deep learning twitter is how even those 100% dedicated to the field can't keep up with progress.
    • Chris McNab: VPN certificates and keys are often found within and lifted from email, ticketing, and chat services.
    • @bodil: And it took two hours where the Rust version has taken three days and I'm still not sure it works.
    • azirbel: One thing that's generalizable (though maybe obvious) is to explicitly define the SLAs for each microservice. There were a few weeks where we gave ourselves paging errors every time a smaller service had a deploy or went down due to unimportant errors.
    • bigzen: I'm worn out on articles dissing the performance of SQL databases without quoting any hard numbers, then proceeding to replace the systems, with no talk of development costs, with the latest and greatest tech. I have nothing against Spark, but I find it very hard to believe that alarm code is now more readable than SQL. In fact, my experience is just the opposite.
    • jhgg: We are experimenting with webworkers to power a very complicated autocomplete and scoring system in our client. So far so good. We're able to keep the UI running at 60fps while we match, score and sort results in a web-worker.
    • DoubleGlazing: NoSQL doesn't reduce development effort. What you gain from not having to worry about modifying schemas and enforcing referential integrity, you lose from having to add more code to your app to check that a DB document has a certain value. In essence you are moving responsibility for data integrity away from the DB and in to your app, something I think is quite dangerous.
    • Const-me: Too bad many computer scientists who write books about those algorithms prefer to view RAM in an old-fashioned way, as fast and byte-addressable.
    • Azur: It always annoys me a bit when tardigrades are described as extremely hardy: they are not. It is ONLY in the desiccated, cryptobiotic, form they are resistant to adverse conditions.
    • rebootthesystem: Hardware engineers can design FPGA-based hardware optimized for ML. A second set of engineers then uses these boards/FPGA's just as they would GPU's. They write code in whatever language to use them as ML co-processors. This second group doesn't have to be composed of hardware engineers. Today someone using a GPU doesn't have to be a hardware engineer who knows how to design a GPU. Same thing.

  • There should be some sort of Metcalfe's law for events. Maybe: the value of a platform is proportional to the square of the number of scriptable events emitted by unconnected services in the system. CloudWatch Events Now Supports AWS Step Functions as a Target. @ben11kehoe: This is *really* useful: Automate your incident response processes with bulletproof state machines. #aws
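    For the curious, here's roughly what that wiring looks like. A minimal sketch using Python and boto3; the rule name, event pattern, ARNs, and IAM role are all hypothetical placeholders, not anything from the tweet:

        import boto3

        events = boto3.client("events")

        # Hypothetical rule: fire on every GuardDuty finding.
        events.put_rule(
            Name="incident-response-trigger",
            EventPattern='{"source": ["aws.guardduty"]}',
            State="ENABLED",
        )

        # Point the rule at a (hypothetical) incident-response state machine.
        events.put_targets(
            Rule="incident-response-trigger",
            Targets=[{
                "Id": "incident-response-sfn",
                "Arn": "arn:aws:states:us-east-1:123456789012:stateMachine:IncidentResponse",
                "RoleArn": "arn:aws:iam::123456789012:role/cwe-to-sfn",
            }],
        )

    The state machine then encodes the response runbook as explicit states and transitions, which is what makes it "bulletproof" relative to a pile of ad hoc scripts.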

  • Cute faux O'Reilly book cover. Solving Imaginary Scaling Issues.

  • Intel's Optane SSD is finally out. Though not quite meeting its initial this-will-change-everything promise, it still might change a lot of things. Intel’s first Optane SSD: 375GB that you can also use as RAM. 10x DRAM latency. 1/1000 NAND latency. 2400MB/s read, 2000MB/s write. 30 full-drive writes per day. 2.5x better density. $4/GB (1/2 RAM cost). 1.5TB capacity. 500k mixed random IOPS. Great random write response. Targeted at power users with big files, like databases. NDAs are still in place so there's more to learn later.
    • PCPerspective: comparing a server with 768GB of DRAM to one with 128GB of DRAM combined with a pair of P4800X's, 80% of the transactions per second were possible (with 1/6th of the DRAM). More impressive was that matrix multiplication of the data saw a 1.1x *increase* in performance. This seems impossible, as Optane is still slower than DRAM, but the key here was that in the DRAM-only configuration, half of the database was hanging off the 'wrong' CPU.
    • foboz1: For anyone thinking that this is a solution looking for a problem, think about two things: Big Data and mobile/embedded. Big Data has an endless appetite for large quantities of memory and fast storage; 3D XPoint plays into the memory hierarchy nicely. At the extreme other end of the scale, it may be fast enough to obviate the need for having DRAM+NAND in some applications.
    • raxx7: And 3D XPoint isn't free of limitations yet. RAM has 50-100 ns latency, 50 GB/s bandwidth (128-bit interface) and unlimited write endurance. If 3D XPoint NVDIMMs can't deliver this, we'll still need to manage the difference between RAM and 3D XPoint NVDIMMs.
    • zogus: The real breakthrough will come, I think, when the OS and applications are rewritten so that they no longer assume that a computer's memory consists of a small, fast RAM bank and a huge, slow persistent set of storage--a model that has held true since just about forever.
    • VertexMaster: Given that DRAM is currently an order of magnitude faster (and several orders vs this real-world X-Point product), I really have a hard time seeing where this fits in.
    • sologoub: we built a system using Druid as the primary store of reporting data. The setup worked amazingly well with the size/cardinality of the data we had, but was constantly bottlenecked at paging segments in and out of RAM. Economically, we just couldn't justify a system with RAM big enough to hold the primary dataset...I don't have access to the original planning calculations anymore, but 375GB at $1520 would definitely have been a game changer in terms of performance/$, and I suspect it would be good enough to make the end user feel like the entire dataset was in memory.
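    To put those numbers side by side, a back-of-envelope comparison in Python. The Optane figures come from the announcement above; the DRAM price (~$8/GB, the "1/2 RAM cost" claim inverted) is an assumption:

        # Cost of a 375 GB working set: Optane P4800X vs. DRAM.
        optane_price_per_gb = 4.0   # $/GB, from the announcement
        dram_price_per_gb = 8.0     # $/GB, assumed from "1/2 RAM cost"
        capacity_gb = 375

        print(f"Optane: ${capacity_gb * optane_price_per_gb:,.0f}")  # ~$1,500
        print(f"DRAM:   ${capacity_gb * dram_price_per_gb:,.0f}")    # ~$3,000

        # Rough latency hierarchy implied by the article's ratios:
        dram_ns = 100                  # raxx7's 50-100 ns figure
        optane_ns = dram_ns * 10       # "10x DRAM latency"
        nand_ns = optane_ns * 1000     # "1/1000 NAND latency"
        print(f"DRAM {dram_ns} ns, Optane ~{optane_ns} ns, NAND ~{nand_ns:,} ns")

    Which lines up with sologoub's $1520 street price, and shows why the interesting question is software: a tier 10x slower than DRAM but 1000x faster than NAND fits none of the existing assumptions.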

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Thunderbolting Your Video Card

Coding Horror - Jeff Atwood - Fri, 03/24/2017 - 10:08

When I wrote about The Golden Age of x86 Gaming, I implied that, in the future, it might be an interesting, albeit expensive, idea to upgrade your video card via an external Thunderbolt 3 enclosure.

I'm here to report that the future is now.

Yes, that's right, I paid $500 for an external Thunderbolt 3 enclosure to fit a $600 video card, all to enable a plug-in upgrade of a GPU on a Skull Canyon NUC that itself cost around $1000 fully built. I know, it sounds crazy, and … OK fine, I won't argue with you. It's crazy.

This matters mostly because of 4k, aka 2160p, aka 3840 × 2160, aka Ultra HD.

4k compared to 1080p

Plain old regular HD, aka 1080p, aka 1920 × 1080, is one quarter the size of 4k, and ¼ the work. By today's GPU standards, HD is pretty much easy mode. It's not even interesting. No offense to console fans, or anything.

Late in 2016, I got a 4k OLED display and it … kind of blew my mind. I have never seen blacks so black, colors so vivid, on a display so thin. It made my previous 2008 era Panasonic plasma set look lame. It's so good that I'm now a little angry that every display that my eyes touch isn't OLED already. I even got into nerd fights over it, and to be honest, I'd still throw down for OLED. It is legitimately that good. Come at me, bro.

Don't believe me? Well, guess which display in the below picture is OLED? Go on, guess:

Guess which screen is OLED?

@andrewbstiles if it was physically possible to have sex with this TV I.. uh.. I'd take it on long, romantic walks

— Jeff Atwood (@codinghorror) August 13, 2016

There's a reason every site that reviews TVs had to recalibrate their results when they reviewed the 2016 OLED sets.

In my extended review at Reference Home Theater, I call it “the best looking TV I’ve ever reviewed.” But we aren’t alone in loving the E6. Vincent Teoh at HDTVtest writes, “We’re not even going to qualify the following endorsement: if you can afford it, this is the TV to buy.” Rtings.com gave the E6 OLED the highest score of any TV the site has ever tested. Reviewed.com awarded it a 9.9 out of 10, with only the LG G6 OLED (which offers the same image but better styling and sound for $2,000 more) coming out ahead.

But I digress.

Playing games at 1080p in my living room was already possible. But now that I have an incredible 4k display in the living room, it's a whole other level of difficulty. Not just twice as hard – and remember current consoles barely manage to eke out 1080p at 30fps in most games – but four times as hard. That's where external GPU power comes in.

The cool technology underpinning all of this is Thunderbolt 3. The Thunderbolt 3 cable bundled with the Razer Core is rather … diminutive. There's a reason for this.

Is there a maximum cable length for Thunderbolt 3 technology?

Thunderbolt 3 passive cables have maximum lengths.

  • 0.5m TB 3 (40Gbps)
  • 1.0m TB 3 (20Gbps)
  • 2.0m TB 3 (20Gbps)

In the future we will offer active cables which will provide 40Gbps of bandwidth at longer lengths.

40Gbps is, for the record, an insane amount of bandwidth. Let's use our rule of thumb based on ultra common gigabit ethernet, that 1 gigabit = 120 megabytes/second, and we arrive at 4.8 gigabytes/second. Zow.

That's more than enough bandwidth to run even the highest of high end video cards, but it is not without overhead. There's a mild performance hit for running the card externally, on the order of 15%. There's also a further performance hit of 10% if you are in "loopback" mode on a laptop where you don't have an external display, so the video frames have to be shuttled back from the GPU to the internal laptop display.
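Putting those figures together (a quick sanity check using only the numbers above, not a benchmark):

    # 40 Gbps through the article's rule of thumb: 1 gigabit ≈ 120 MB/s.
    tb3_gbps = 40
    bandwidth_mb_s = tb3_gbps * 120
    print(f"TB3: ~{bandwidth_mb_s:,} MB/s (~{bandwidth_mb_s / 1000:.1f} GB/s)")

    # Overheads cited above: ~15% for running the card externally,
    # a further ~10% in laptop "loopback" mode.
    external = 1 - 0.15
    loopback = external * (1 - 0.10)
    print(f"External display: ~{external:.0%} of native performance")
    print(f"Laptop loopback:  ~{loopback:.0%} of native performance")

So in the worst case you keep roughly three quarters of the card's native performance, and with an external display closer to 85%.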

This may look like a gamer-only thing, but surprisingly, it isn't. What you get is the general purpose ability to attach any PCI express card to any computer with a Thunderbolt 3 port and, for the most part, it just works!

Linus breaks it down and answers all your most difficult questions:

Please watch the above video closely if you're actually interested in this stuff; it is essential. I'll add some caveats of my own after working with the Razer Core for a while:

  • Make sure the video card you plan to put into the Razer Core is not too tall, or too wide. You can tell if a card is going to be too tall by looking at pictures of the mounting rear bracket. If the card extends significantly above the standard rear mounting bracket, it won't fit. If the card takes more than 2 slots in width, it also won't fit, but this is more rare. Depth (length) is rarely an issue.

  • There are four fans in the Razer Core and although it is reasonably quiet, it's not super silent or anything. You may want to mod the fans. The Razer Core is a remarkably simple device internally: it's really just a power supply, some Thunderbolt 3 bridge logic, and a PCI express slot. I agree with Linus that the #1 area Razer could improve in the future, beyond generally getting the price down, is to use fewer and larger fans that run quieter.

  • If you're putting a heavy hitter GPU in the Razer Core, I'd try to avoid blower style cards (the ones that exhaust heat from the rear) in favor of those that cool with large fans blowing down and around the card. Dissipating 150w+ is no mean feat and you'll definitely need to keep the enclosure in open air … and of course within 0.5 meters of the computer it's connected to.

  • There is no visible external power switch on the Razer Core. It doesn't power on until you connect a TB3 cable to it. I was totally not expecting that. But once connected, it powers up and the Windows 10 Thunderbolt 3 drivers kick in and ask you to authorize the device, which I did (always authorize). Then it spun a bit, detected the new GPU, and suddenly I had multiple graphics cards active on the same computer. I also installed the latest Nvidia drivers just to make sure everything was ship shape.

  • It's kinda ... weird having multiple GPUs simultaneously active. I wanted to make the Razer Core display the only display, but you can't really turn off the built in GPU – you can select "only use display 2", that's all. I got into several weird states where windows were opening on the other display and I had to mess around a fair bit to get things locked down to just one display. You may want to consider whether you have both "displays" connected for troubleshooting, or not.

And then, there I am, playing Lego Marvel in splitscreen co-op at glorious 3840 × 2160 UltraHD resolution on an amazing OLED display with my son. It is incredible.

Beyond the technical "because I could", I am wildly optimistic about the future of external Thunderbolt 3 expansion boxes, and here's why:

  • The main expense and bottleneck in any stonking gaming rig is, by far, the GPU. It's also the item you are most likely to need to replace a year or two from now.

  • The CPU and memory speeds available today are so comically fast that any device with a low-end i3-7100 for $120 will make zero difference in real world gaming at 1080p or higher … if you're OK with 30fps minimum. If you bump up to $200, you can get a quad-core i5-7500 that guarantees you 60fps minimum everywhere.

  • If you prefer a small system or a laptop, an external GPU makes it so much more flexible. Because CPU and memory speeds are already so fast, 99.9% of the time your bottleneck is the GPU, and almost any small device you can buy with a Thunderbolt 3 port can now magically transform into a potent gaming rig with a single plug. Thunderbolt 3 may be a bit cutting edge today, but more and more devices are shipping with Thunderbolt 3. Within a few years, I predict TB3 ports will be as common as USB3 ports.

  • A general purpose external PCI express enclosure will be usable for a very long time. My last seven video card upgrades were plug and play PCI Express cards that would have worked fine in any computer I've built in the last ten years.

  • External GPUs are not meaningfully bottlenecked by Thunderbolt 3 bandwidth; the impact is 15% to 25%, and perhaps even less over time as drivers and implementations mature. While Thunderbolt 3 has "only" PCI Express x4 bandwidth, many benchmarkers have noted that moving GPUs from PCI Express x16 to x8 has almost no effect on performance (see the sketch below). And there's always Thunderbolt 4 on the horizon.
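For a rough sense of why x4 is enough, a sketch; the per-lane throughput (~985 MB/s for PCIe 3.0, after encoding overhead) is my figure, not the article's:

    # Approximate usable PCIe 3.0 bandwidth by lane count.
    per_lane_mb_s = 985  # assumption: PCIe 3.0, post-encoding throughput

    for lanes in (4, 8, 16):
        print(f"PCIe 3.0 x{lanes}: ~{lanes * per_lane_mb_s / 1000:.1f} GB/s")

If dropping from x16 to x8 barely registers in GPU benchmarks, the further step to x4 (~3.9 GB/s, in the same ballpark as TB3's ~4.8 GB/s ceiling) plausibly explains why the hit stays in that modest 15% to 25% range.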

The future, as they say, is already here – it's just not evenly distributed.

I am painfully aware that costs need to come down. Way, way down. The $499 Razer Core is well made, on the vanguard of what's possible, a harbinger of the future, and fantastically enough, it does even more than what it says on the tin. But it's not exactly affordable.

I would absolutely love to see a modest, dedicated $200 external Thunderbolt 3 box that included an inexpensive current-gen GPU. This would clobber any onboard GPU on the planet. Let's compare my Skull Canyon NUC, which has Intel's fastest ever, PS4 class embedded GPU, with the modest $150 GeForce GTX 1050 Ti:

1920 × 1080, high detail:

  • Bioshock Infinite: 15 → 79 fps
  • Rise of the Tomb Raider: 12 → 49 fps
  • Overwatch: 43 → 114 fps
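Doing the division on those three results (a trivial check of the claim that follows):

    # Speedup of the $150 GTX 1050 Ti over the NUC's embedded GPU,
    # using the fps pairs from the list above.
    fps = {
        "Bioshock Infinite": (15, 79),
        "Rise of the Tomb Raider": (12, 49),
        "Overwatch": (43, 114),
    }
    for game, (embedded, egpu) in fps.items():
        print(f"{game}: {egpu / embedded:.1f}x")
    # -> 5.3x, 4.1x, 2.7x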

As predicted, that's a 3x-5x stompdown. Mac users lamenting their general lack of upgradeability, hear me: this sort of box is exactly what you want and need. Imagine if Apple was to embrace upgrading their laptops and all-in-one systems via Thunderbolt 3.

I know, I know. It's a stretch. But a man can dream … of externally upgradeable GPUs. That are too expensive, sure, but they are here, right now, today. They'll only get cheaper over time.

Categories: Programming
