Software Development Blogs: Programming, Software Testing, Agile, Project Management

Feed aggregator

The Myth of "Discover by Doing"

Herding Cats - Glen Alleman - Wed, 07/27/2016 - 04:21

There is a popular Agile and No Estimates phrase...

It is by doing the work we discover the work we must do

This, of course, ignores the notion of engineering or designing a solution to the customer's needed Capabilities BEFORE coding starts. It is certainly the case that some aspects of the software solution can only be confirmed when the working software is available for use. But like all good platitudes in the agile community, there is no domain or context as to where this phrase is applicable. Where can coding start before there is some sort of framework for how the needed capabilities will be delivered?

  • Shall we just start coding and see what comes out?
  • How about just buying a COTS product and starting to install it to see if it is going to meet our needs?

This not only sounds naïve, it sounds like we're wandering around looking for a solution without any definition of what the problem is. With that approach, when a solution appears it may not be recognized as the solution. Agile is certainly the basis for dealing with emerging requirements. But all good agile processes have some sense of what the customer is looking for.

This understanding of what capabilities the customer needs starts with a Product Roadmap. The Product Roadmap is a plan that matches short-term and long-term business goals with specific technology solutions to help meet those goals.

A Plan is a Strategy for success. All strategies have a hypothesis. A Hypothesis needs to be tested. This is what working software does. It tests the hypothesis of the Strategy described in the Product Roadmap.

So if you have to do the work to discover what work must be done, you've got an Open Loop control system. To close the loop, this emergent work needs a target to steer toward. With this target, the working software can be compared to the desired working software, and the variance between the two is used to take corrective actions to steer toward the desired goals.

And of course, since the steering target (goal) and the path to this goal are both random variables - estimates will be needed to close the loop of the control processes used to reach the desired outcomes that meet the Capabilities requested by the customer.
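
Here's a toy sketch of that closed-loop idea in Python (hypothetical numbers, not from the original post): each iteration compares delivered capability against the target, and the variance drives both the corrective action and a revised estimate.

import random

# Toy closed-loop steering sketch: compare progress against a target each iteration,
# use the variance to correct course, and re-estimate what remains.
target_capability = 100.0        # the steering target described by the Product Roadmap
delivered = 0.0
estimate_per_iteration = 12.0    # initial estimate of progress per iteration

for iteration in range(1, 11):
    delivered += estimate_per_iteration * random.uniform(0.7, 1.1)  # actual rarely equals estimate
    variance = target_capability - delivered
    if variance <= 0:
        print("iteration %d: target reached (%.1f delivered)" % (iteration, delivered))
        break
    remaining_iterations = 10 - iteration
    if remaining_iterations:
        # corrective action: re-estimate the pace needed to still hit the target
        estimate_per_iteration = variance / remaining_iterations
    print("iteration %d: delivered %.1f, variance %.1f, revised estimate %.1f" %
          (iteration, delivered, variance, estimate_per_iteration))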

Categories: Project Management

scikit-learn: TF/IDF and cosine similarity for computer science papers

Mark Needham - Wed, 07/27/2016 - 03:45

A couple of months ago I downloaded the metadata for a few thousand computer science papers so that I could try to write a mini recommendation engine to tell me what paper I should read next.

Since I don’t have any data on who read which paper, a collaborative filtering approach is ruled out, so I thought I could try content-based filtering instead.

Let’s quickly check the Wikipedia definition of content based filtering:

In a content-based recommender system, keywords are used to describe the items and a user profile is built to indicate the type of item this user likes.

In other words, these algorithms try to recommend items that are similar to those that a user liked in the past (or is examining in the present).

We’re going to focus on the “finding similar items” part of the algorithm, and we’ll start simple by calculating the similarity of items based on their titles. We’d probably get better results if we used the full text of the papers, or at least the abstracts, but that data isn’t as readily available.

We’re going to take the following approach to work out the similarity between any pair of papers:

for each paper:
  generate a TF/IDF vector of the terms in the paper's title
  calculate the cosine similarity of each paper's TF/IDF vector with every other paper's TF/IDF vector

This is very easy to do using the Python scikit-learn library and I’ve actually done the first part of the process while doing some exploratory analysis of interesting phrases in the TV show How I Met Your Mother.

Let’s get started.

We’ve got one file per paper which contains the title of the paper. We first need to iterate through that directory and build an array containing the papers:

import glob
 
corpus = []
for file in glob.glob("papers/*.txt"):
    with open(file, "r") as paper:
        corpus.append((file, paper.read()))

Next we’ll build a TF/IDF matrix for each paper:

from sklearn.feature_extraction.text import TfidfVectorizer
 
tf = TfidfVectorizer(analyzer='word', ngram_range=(1,3), min_df = 0, stop_words = 'english')
tfidf_matrix = tf.fit_transform([content for file, content in corpus])
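
If you want to sanity check what went into the matrix, the vectorizer exposes its vocabulary. A quick peek (get_feature_names() on scikit-learn versions of that era; newer releases call it get_feature_names_out()):

print(tfidf_matrix.shape)               # (number of papers, number of distinct terms/n-grams)
feature_names = tf.get_feature_names()  # tf.get_feature_names_out() on newer scikit-learn
print(feature_names[:10])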

Next we’ll write a function that will find us the top n similar papers based on cosine similarity:

from sklearn.metrics.pairwise import linear_kernel
 
def find_similar(tfidf_matrix, index, top_n = 5):
    cosine_similarities = linear_kernel(tfidf_matrix[index:index+1], tfidf_matrix).flatten()
    related_docs_indices = [i for i in cosine_similarities.argsort()[::-1] if i != index]
    return [(index, cosine_similarities[index]) for index in related_docs_indices][0:top_n]
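
(linear_kernel is just the dot product; it works as cosine similarity here because TfidfVectorizer L2-normalises each row by default, so the dot product of two rows is exactly their cosine.)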

Let’s try it out:

>>> corpus[1619]
('papers/221215.txt', 'TOTEM: a reliable ordered delivery protocol for interconnected local-area networks')
 
>>> for index, score in find_similar(tfidf_matrix, 1619):
       print score, corpus[index]
 
0.917540397202 ('papers/852338.txt', 'A reliable ordered delivery protocol for interconnected local area networks')
0.248736845733 ('papers/800897.txt', 'Interconnection of broadband local area networks')
0.207309089025 ('papers/103726.txt', 'High-speed local area networks and their performance: a survey')
0.204166719869 ('papers/161736.txt', 'High-speed switch scheduling for local-area networks')
0.198514433132 ('papers/627363.txt', 'Algorithms for Distributed Query Processing in Broadcast Local Area Networks')

It’s pretty good for finding duplicate papers!

>>> corpus[1599]
('papers/217470.txt', 'A reliable multicast framework for light-weight sessions and application level framing')
 
>>> for index, score in find_similar(tfidf_matrix, 1599):
       print score, corpus[index]
 
1.0            ('papers/270863.txt', 'A reliable multicast framework for light-weight sessions and application level framing')
0.139643354066 ('papers/218325.txt', 'The KryptoKnight family of light-weight protocols for authentication and key distribution')
0.134763799612 ('papers/1251445.txt', 'ALMI: an application level multicast infrastructure')
0.117630311817 ('papers/125160.txt', 'Ordered and reliable multicast communication')
0.117630311817 ('papers/128741.txt', 'Ordered and reliable multicast communication')

But sometimes it identifies duplicates that aren’t identical:

>>> corpus[5784]
('papers/RFC2616.txt', 'Hypertext Transfer Protocol -- HTTP/1.1')
 
>>> for index, score in find_similar(tfidf_matrix, 5784):
       print score, corpus[index]
 
1.0 ('papers/RFC1945.txt', 'Hypertext Transfer Protocol -- HTTP/1.0')
1.0 ('papers/RFC2068.txt', 'Hypertext Transfer Protocol -- HTTP/1.1')
0.232865694216 ('papers/131844.txt', 'XTP: the Xpress Transfer Protocol')
0.138876842331 ('papers/RFC1866.txt', 'Hypertext Markup Language - 2.0')
0.104775586915 ('papers/760249.txt', 'On the transfer of control between contexts')

Having said that, if you were reading and liked the HTTP 1.0 RFC the HTTP 1.1 RFC probably isn’t a bad recommendation.

There are obviously also some papers that get identified as similar which aren’t. I created a CSV file containing the 5 most similar papers for each paper, as long as the similarity is greater than 0.5. You can see the script that generates that file on github as well.
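
That script isn’t reproduced here, but a minimal sketch of the CSV step (reusing the corpus, tfidf_matrix and find_similar from above, with the 0.5 threshold) might look something like this:

import csv

# Write up to 5 similar papers per paper, keeping only scores above 0.5
with open("similarities.csv", "w") as csv_file:
    writer = csv.writer(csv_file)
    writer.writerow(["paper", "similar_paper", "score"])
    for index, (file_name, _) in enumerate(corpus):
        for other_index, score in find_similar(tfidf_matrix, index):
            if score > 0.5:
                writer.writerow([file_name, corpus[other_index][0], score])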

That’s as far as I’ve got for now but there are a couple of things I’m going to explore next:

  • How do we know if the similarity suggestions are any good? How do we measure good? Would using a term counting vector work better than TF/IDF?
  • Similarity based on abstracts as well as/instead of titles

All the code from this post for calculating similarities and writing them to CSV is on github as well so feel free to play around with it.

Categories: Programming

Test First and Test Driven Development: Is There a Difference?

Testing is about predicting the future!

Test-first development is an old concept that was rediscovered and documented by Kent Beck in Extreme Programming Explained (Chapter 13 in the Second Edition). Test-first development (TFD) is an approach to development in which developers do not write a single line of code until they have created the test cases needed to prove that the unit of work solves the business problem and is technically correct at a unit-test level. In a response to a question on Quora, Beck described reading about developers using a test-first approach well before XP and Agile. Test-driven development is test-first development combined with design and code refactoring. Both test-first and test-driven development are useful for improving quality, morale and trust, and even though they are related they are not the same.

A little more history: test-first programming/development was introduced (or re-introduced) as a primary practice of Extreme Programming in Chapter 7 of Extreme Programming Explained (page 50). Test-driven development as a method was described in Test Driven Development: By Example (2003, perhaps we will re-read this book in the future), and is an evolution of the test-first concept.

Test-first development has a few basic steps (a minimal code sketch follows the list).

  1. The developer accepts a unit of work and writes a set of tests that will prove that the code actually functions correctly at a unit level.
  2. They then run the tests.  The tests should fail because the code to solve the business problem embedded in the unit of work has not been written.  If the tests pass, rewrite them so that they fail (assuming someone else has not fixed the problem).
  3. Write the code needed to solve the problem. Remember that simplicity is king and only write enough code to solve the problem.
  4. Run the test suite again. If the tests pass you are done; however, if ANY of the tests fail, return to step three and correct the code. Repeat steps three and four until all tests pass.
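
As a minimal illustration of that cycle (a made-up example in Python's unittest, not one of Beck's): the test is written first, fails because the code does not yet exist, and then just enough code is written to make it pass.

import unittest

# Step 1: write the tests before the production code exists.
class TestLeapYear(unittest.TestCase):
    def test_years_divisible_by_four_are_leap_years(self):
        self.assertTrue(is_leap_year(2016))

    def test_century_years_are_only_leap_years_when_divisible_by_400(self):
        self.assertTrue(is_leap_year(2000))
        self.assertFalse(is_leap_year(1900))

# Step 2: running the suite now fails -- is_leap_year does not exist yet.
# Step 3: write only enough code to make the tests pass.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Step 4: run the suite again; repeat steps 3 and 4 until everything is green.
if __name__ == "__main__":
    unittest.main()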

Test-driven development, TDD (also known as test-driven design) adds one additional “step” to the process after the unit tests.

  1. Refactor the code and design to make both as simple as possible and remove any possible duplication.

As the code is written and refactored, the design evolves based on the feedback gathered one story at a time. TDD integrates the practice of coding and unit testing with evolutionary design, breaking down the separation of roles that reduces collaboration and increases the cost of quality. This is a conceptual advance; however, there are domains, such as ATMs, automotive products and medical devices, where the concept of evolutionary design is a bridge too far, leaving test-first as the only option.

In TDD by Example, Beck identifies two “rules” for TDD that are not directly identified in the introduction of TFD. The first is to never write a line of code until you have written a failing automated test, and the second is to avoid duplication. TFD recognized the need to combine manual and automated unit tests. Both of these rules could (and should, if possible) be applied to both TDD and TFD, and in the long run they are just good practice.

The only significant difference between test-first and test-driven development is a biggie: the use of the coding and unit-testing feedback loop as a tool to propel incremental and emergent design techniques.


Categories: Process Management

Introducing new app categories -- From Art to Autos to Dating -- to help users better find your apps

Android Developers Blog - Tue, 07/26/2016 - 23:14

Posted by Sarah Karam, Google Play Apps Business Development

With more than 1 billion active users in 190 countries around the world, Google Play continues to be an important distribution platform for you to build a global audience. To help you get your apps in front of more users, it’s important to make them more quickly and easily discoverable in Google Play. That’s why we rolled out major features, such as Search Ads, Indie Corner, store listing experiments, and more, over the past year.

To improve the overall search experience, we’re introducing new app categories and renaming a few existing ones, making them more comprehensive and relevant to what users are looking for today.

The new categories include:

  • Art & Design
  • Auto & Vehicles
  • Beauty
  • Dating
  • Events
  • Food & Drink
  • House & Home
  • Parenting

In addition, the “Transportation” category will be renamed “Maps & Navigation,” and the “Media & Video” category will be renamed “Video Players & Editors.”

To select a new category for your app or game

  1. Sign in to your Google Play Developer Console.
  2. Select an app.
  3. On the left menu, click Store Listing.
  4. Under "Categorization," select an application type and category.
  5. Near the top of the page, click Save draft (new apps) or Submit update (existing apps).

Newly added categories will be available on Google Play within 60 days. If you choose a newly added category for an app before the category is available for users, your current app category may change. See additional details and view our full list of categories in the Help Center.

Categories: Programming

CC to Everyone

James Bach’s Blog - Tue, 07/26/2016 - 18:23
I sent this to someone who’s angry with me due to some professional matter we debated. A colleague thought it would be worth showing you, too. So, for whatever it’s worth:

I will say this. I don’t want anyone to feel bad about me, or about my behavior, or about themselves. I can live with that, but I don’t want it.

So, if there is something simple I can do to help people feel better, and it does not require me to tell a lie, then I am willing to do so.

I want people to excel at their craft and be happy. That’s actually what is motivating me, underneath all my arguing.

Categories: Testing & QA

Improvements for smaller app downloads on Google Play

Android Developers Blog - Tue, 07/26/2016 - 16:49

Posted by Anthony Morris, SWE Google Play

Google Play continues to grow rapidly, as Android users installed over 65 billion apps in the last year from the Google Play Store. We’re also seeing developers move to update their apps more frequently to push great new content, patch security vulnerabilities, and iterate quickly on user feedback.

However, many users are sensitive to the amount of data they use, especially if they are not on Wi-Fi. Google Play is investing in improvements to reduce the data that needs to be transferred for app installs and updates, while making data cost more transparent to users.

Read on to understand the updates and learn some tips for ways to optimize the size of your APK.

New Delta algorithm to reduce the size of app updates

For approximately 98% of app updates from the Play Store, only changes (deltas) to APK files are downloaded and merged with the existing files, reducing the size of updates. Google Play has used delta algorithms since 2012, and we recently rolled out an additional delta algorithm, bsdiff (created by Colin Percival [1]), that our experimentation shows can reduce delta size by up to 50% or more compared to the previous algorithm for some APKs. Bsdiff is specifically targeted to produce more efficient deltas of native libraries by taking advantage of the specific ways in which compiled native code changes between versions. To be most effective, native libraries should be stored uncompressed (compression interferes with delta algorithms).

An example from Chrome:

  Patch description          Previous patch size    Bsdiff size
  M46 to M47 major update    22.8 MB                12.9 MB
  M47 minor update           15.3 MB                3.6 MB

Apps that don’t have uncompressed native libraries can see a 5% decrease in size on average, compared to the previous delta algorithm.
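
To get a feel for why compression interferes with delta algorithms, here is a small, self-contained Python illustration (a toy, nothing to do with Play's actual pipeline): changing a single byte in the raw payload changes one position, but changing that byte before compressing perturbs a large chunk of the compressed stream, so a delta against the compressed form is far bigger.

import zlib

# Two nearly identical payloads: v2 differs from v1 by a single byte.
v1 = bytes(range(256)) * 4000
v2 = bytearray(v1)
v2[500000] ^= 0xFF
v2 = bytes(v2)

def positions_that_differ(a, b):
    # Crude stand-in for "how big would a delta between these two be?"
    return sum(x != y for x, y in zip(a, b)) + abs(len(a) - len(b))

print("raw delta:       ", positions_that_differ(v1, v2))  # exactly 1
print("compressed delta:", positions_that_differ(zlib.compress(v1), zlib.compress(v2)))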

Applying the delta algorithm to APK Expansion Files to further reduce update size

APK Expansion Files allow you to include additional large files up to 2GB in size (e.g. high resolution graphics or media files) with your app, which is especially popular with games. We have recently expanded our delta and compression algorithms to apply to these APK Expansion Files in addition to APKs, reducing the download size of initial installs by 12%, and updates by 65% on average. APK Expansion file patches use the xdelta algorithm.

Clearer size information in the Play Store

Alongside the improvements to reduce download size, we also made information displayed about data used and download sizes in the Play Store clearer. You can now see actual download sizes, not the APK file size, in the Play Store. If you already have an app, you will only see the update size. These changes are rolling out now.

  1. Colin Percival, Naive differences of executable code, http://www.daemonology.net/bsdiff/, 2003. 

Example 1: Showing new “Download size” of APK

Example 2: Showing new “Update size” of APK

Tips to reduce your download sizes

1. Optimize for the right size measurements: Users care about download size (i.e. how many bytes are transferred when installing/updating an app), and they care about disk size (i.e. how much space the app takes up on disk). It’s important to note that neither of these is the same as the original APK file size, nor are they necessarily correlated with it.


Chrome example:

                             Compressed native library    Uncompressed native library
  APK size                   39 MB                        52 MB (+25%)
  Download size (install)    29 MB                        29 MB (no change)
  Download size (update)     29 MB                        21 MB (-29%)
  Disk size                  71 MB                        52 MB (-26%)

Chrome found that the initial download size remained the same by not compressing the native library in their APK, even though the APK size increased, because Google Play already performs compression for downloads. They also found that the update size decreased, as deltas are more effective with uncompressed files, and that disk size decreased, as you no longer need a compressed copy of the native library. However, please note that native libraries should only be uncompressed when the minimum SDK version for an APK is 23 (Marshmallow) or later.

2. Reduce your APK size: Remove unnecessary data from the APK like unused resources and code.

3. Optimize parts of your APK to make them smaller: Use more efficient file formats, for example WebP instead of JPEG, and use ProGuard to remove unused code.
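
As a rough, tool-agnostic illustration of the image-format point (using Pillow rather than the Android tool-chain, and assuming a Pillow build with WebP support; the file names are hypothetical):

from PIL import Image
import os

# Convert a JPEG asset to lossy WebP and compare the resulting file sizes.
Image.open("hero_banner.jpg").save("hero_banner.webp", "WEBP", quality=80)
print(os.path.getsize("hero_banner.jpg"), os.path.getsize("hero_banner.webp"))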

Read more about reducing APK sizes and watch the I/O 2016 session ‘Putting Your App on a Diet’ to learn from Wojtek Kaliciński how to reduce the size of your APK.

Categories: Programming

Quote of the Day

Herding Cats - Glen Alleman - Tue, 07/26/2016 - 04:38

If you can't explain what you are doing as a process, then you don't know what you are doing - Deming

Process is the answer to the question "How do we do things around here?" All organizations should have a widely accepted Process for making decisions. ("A New Engineering Profession is Emerging: Decision Coach," IEEE Engineering Management Review, Vol. 44, No. 2, Second Quarter, June 2016)

Categories: Project Management

I/O session: Location and Proximity Superpowers: Eddystone + Google Beacon Platform

Google Code Blog - Mon, 07/25/2016 - 19:10

Originally posted on Geo Developers blog

Bluetooth beacons mark important places and objects in a way that your phone understands. Last year, we introduced the Google beacon platform including Eddystone, Nearby Messages and the Proximity Beacon API that helps developers build beacon-powered proximity and location features in their apps.
Since then, we’ve learned that when deployment of physical infrastructure is involved, it’s important to get the best possible value from your investment. That’s why the Google beacon platform works differently from the traditional approach.

We don’t think of beacons as only pointing to a single feature in an app, or a single web resource. Instead, the Google beacon platform enables extensible location infrastructure that you can manage through your Google Developer project and reuse many times. Each beacon can take part in several different interactions: through your app, through other developers’ apps, through Google services, and the web. All of this functionality works transparently across Eddystone-UID and Eddystone-EID -- because using our APIs means you never have to think about monitoring for the individual bytes that a beacon is broadcasting.

For example, we’re excited that the City of Amsterdam has adopted Eddystone and the newly released publicly visible namespace feature for the foundation of their open beacon network. Or, through Nearby Notifications, Eddystone and the Google beacon platform enable explorers of the BFG Dream Jar Trail to discover cloud-updateable content in Dream Jars across London.

To make getting started as easy as possible we’ve provided a set of tools to help developers, including links to beacon manufacturers that can help you with Eddystone, Beacon Tools (for Android and iOS), the Beacon Dashboard, a codelab and of course our documentation. And, if you were not able to attend Google I/O in person this year, you can watch my session, Location and Proximity Superpowers: Eddystone + Google Beacon Platform. We can’t wait to see what you build!

About Peter: I am a Product Manager for the Google beacon platform, including the open beacon format Eddystone, and Google's cloud services that integrate beacon technology with first and third party apps. When I’m not working at Google I enjoy taking my dog, Oscar, for walks on Hampstead Heath.
Categories: Programming

SE-Radio Episode 263: Camille Fournier on Real-World Distributed Systems

Stefan Tilkov talks to Camille Fournier about the challenges developers face when building distributed systems. Topics include the definition of a distributed system, whether developers can avoid building them at all, and what changes occur once they choose to. They also talk about the role distributed consensus tools such as Apache Zookeeper play, and whether […]
Categories: Programming

Scrum Day Europe 2016

Xebia Blog - Mon, 07/25/2016 - 10:50
During the 5th edition of Scrum Day Europe, Laurens and I facilitated a workshop on how to “Add Visual Flavor to Your Organization Transformation with Videoscribe.” The theme of the conference, “The Next Iteration,”  was all about the future of Scrum. We wanted to tie our workshop into the theme of the conference, so we had

The Raspberry Pi Has Revolutionized Emulation

Coding Horror - Jeff Atwood - Sun, 07/24/2016 - 23:12

Every geek goes through a phase where they discover emulation. It's practically a rite of passage.

I think I spent most of my childhood – and a large part of my life as a young adult – desperately wishing I was in a video game arcade. When I finally obtained my driver's license, my first thought wasn't about the girls I would take on dates, or the road trips I'd take with my friends. Sadly, no. I was thrilled that I could drive myself to the arcade any time I wanted.

My two arcade emulator builds in 2005 satisfied my itch thoroughly. I recently took my son Henry to the California Extreme expo, which features almost every significant pinball and arcade game ever made, live and in person and real. He enjoyed it so much that I found myself again yearning to share that part of our history with my kids – in a suitably emulated, arcade form factor.

Down, down the rabbit hole I went again:

I discovered that emulation builds are so much cheaper and easier now than they were when I last attempted this a decade ago. Here's why:

  1. The ascendance of Raspberry Pi has single-handedly revolutionized the emulation scene. The Pi is now on version 3, which adds critical WiFi and Bluetooth functionality on top of additional speed. It's fast enough to emulate N64 and PSX and Dreamcast reasonably, all for a whopping $35. Just download the RetroPie bootable OS on a $10 32GB SD card, slot it into your Pi, and … well, basically you're done. The distribution comes with some free games on it. Add additional ROMs and game images to taste.

  2. Chinese all-in-one JAMMA cards are available everywhere for about $90. Pandora's Box is one "brand". These things are an entire 60-in-1 to 600-in-1 arcade on a board, with an ARM CPU and built-in ROMs and everything … probably completely illegal and unlicensed, of course. You could buy some old broken down husk of an arcade game cabinet, anything at all as long as it's a JAMMA compatible arcade game – a standard introduced in 1985 – with working monitor and controls. Plug this replacement JAMMA box in, and bam: you now have your own virtual arcade. Or you could build or buy a new JAMMA compatible cabinet; there are hundreds out there to choose from.

  3. Cheap, quality arcade size IPS LCDs of 18-23". The CRTs I used in 2005 may have been truer to old arcade games, but they were a giant pain to work with. They're enormous, heavy, and require a lot of power. Viewing angle and speed of refresh are rather critical for arcade machines, and both are largely solved problems for LCDs at this point, which are light, easy to work with, and sip power for $100 or less.

Add all that up – it's not like the price of MDF or arcade buttons and joysticks has changed substantially in the last decade – and what we have today is a console and arcade emulation wonderland! If you'd like to go down this rabbit hole with me, bear in mind that I've just started, but I do have some specific recommendations.

Get a Raspberry Pi starter kit. I recommend this particular starter kit, which includes the essentials: a clear case, heatsinks – you definitely want small heatsinks on your 3, as it dissipates almost 4 watts under full load – and a suitable power adapter. That's $50.

Get a quality SD card. The primary "drive" on your Pi will be the SD card, so make it a quality one. Based on these excellent benchmarks, I recommend the Sandisk Extreme 32GB or Samsung Evo+ 32GB models for the best price to performance ratio. That'll be $15, tops.

Download and install the bootable RetroPie image on your SD card. It's amazing how far this project has come since 2013, it is now about as close to plug and play as it gets for free, open source software. The install is, dare I say … "easy"?

Decide how much you want to build. At this point you have a fully functioning emulation brain for well under $100 which is capable of playing literally every significant console and arcade game created prior to 1997. Your 1985 self is probably drunk with power. It is kinda awesome. Stop doing the Safety Dance for a moment and ask yourself these questions:

  • What controls do you plan to plug in via the USB ports? This will depend heavily on which games you want to play. Beyond the absolute basics of joystick and two buttons, there are Nintendo 64 games (think analog stick(s) required), driving games, spinner and trackball games, multiplayer games, yoke control games (think Star Wars), virtual gun games, and so on.

  • What display do you plan to plug in via the HDMI port? You could go with a tiny screen and build a handheld emulator, the Pi is certainly small enough. Or you could have no display at all, and jack in via HDMI to any nearby display for whatever gaming jamboree might befall you and your friends. I will say that, for whatever size you build, more display is better. Absolutely go as big as you can in the allowed form factor, though the Pi won't effectively use much more than a 1080p display maximum.

  • How much space do you want to dedicate to the box? Will it be portable? You could go anywhere from ultra-minimalist – a control box you can plug into any HDMI screen with a wireless controller – to a giant 40" widescreen stand up arcade machine with room for four players.

  • What's your budget? We've only spent under $100 at this point, and great screens and new controllers aren't a whole lot more, but sometimes you want to build from spare parts you have lying around, if you can.

  • Do you have the time and inclination to build this from parts? Or do you prefer to buy it pre-built?

These are all your calls to make. You can get some ideas from the pictures I posted at the top of this blog post, or search the web for "Raspberry Pi Arcade" for lots of other ideas.

As a reasonable all-purpose starting point, I recommend the Build-Your-Own-Arcade kits from Retro Built Games. From $330 for full kit, to $90 for just the wood case.

You could also buy the arcade controls alone for $75, and build out (or buy) a case to put them in.

My "mainstream" recommendation is a bartop arcade. It uses a common LCD panel size in the typical horizontal orientation, it's reasonably space efficient and somewhat portable, while still being comfortably large enough for a nice big screen with large speakers gameplay experience, and it supports two players if that's what you want. That'll be about $100 to $300 depending on options.

I remember spending well over $1,500 to build my old arcade cabinets. I'm excited that it's no longer necessary to invest that much time, effort or money to successfully revisit our arcade past.

Thanks largely to the Raspberry Pi 3 and the RetroPie project, this is now a simple Maker project you can (and should!) take on in a weekend with a friend or family. For a budget of $100 to $300 – maybe $500 if you want to get extra fancy – you can have a pretty great classic arcade and classic console emulation experience. That's way better than I was doing in 2005, even adjusting for inflation.

Categories: Programming

SPaMCAST 404 – Ryan Ripley, The Business of Agile


http://www.spamcast.net

Listen Now
Subscribe on iTunes
Check out the podcast on Google Play Music

Software Process and Measurement Cast 404 features our interview with Ryan Ripley.  We discussed The Business of Agile: Better, Faster, Cheaper at Agile. We discussed why having the answer for whether Agile is better, faster and cheaper is still important in the business world. Along the way we wrestled with the concept of value and why having value sooner is not the same as going fast.  

Ryan Ripley has worked on agile teams for the past 10 years in development, scrum master, and management roles. He’s worked at various Fortune 500 companies in the medical device, wholesale, and financial services industries.

Ryan is great at taking tests and holds the PMI-ACP, PSM I, PSM II, PSE, PSPO I, PSD I, CSM and CSPO agile certifications.

Ryan lives in Indiana with his wife Kristin and their three children.

He blogs at ryanripley.com and hosts the Agile for Humans podcast.

You can also follow Ryan on twitter: @ryanripley

Re-Read Saturday News

This week we continue our re-read of Kent Beck’s XP Explained, Second Edition with a discussion of Chapters 9 and 10. It is great to see the concepts we explored when we re-read Goldratt’s The Goal come back to roost.  This week we focus on roles, the definition of team, flow and more flow.    

Use the link to XP Explained in the show notes when you buy your copy to read along to support both the blog and podcast. Visit the Software Process and Measurement Blog (www.tcagley.wordpress.com) to catch up on past installments of Re-Read Saturday.

Next SPaMCAST

The next Software Process and Measurement Cast will feature our essay on productivity. A lot of people would tell you productivity does not matter or that discussing productivity in today’s Agile world is irrational. They are wrong. Productivity is about jobs. We will also have columns from the QA Corner and from Jon Quigley. I think 405 might be just a bit controversial.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.


Categories: Process Management


Extreme Programming Explained, Second Edition: Week 6


This week we tackle teams in XP and why XP works based on the Theory of Constraints in Extreme Programming Explained, Second Edition (2005). The two chapters are linked by the idea that work is delivered most effectively when teams or organizations achieve a consistent flow.

Chapter 10 – The Whole XP Team

The principle of flow, as described in our re-read of Goldratt’s The Goal, holds that more value is created when a system achieves a smooth and steady stream of output. In order to achieve a state of flow, everyone on the team needs to be linked to the work to reduce delay and friction between steps. Implementing the steps necessary to address complex work within a team is often at odds with how waterfall projects break work down based on specialties. Unless the barriers between specialties are broken down, it is hard to get people to agree that you can work incrementally in small chunks rather than in specialty-based phases such as planning, analysis, design and more.

Every “specialty” needs to understand their role in XP. 

Testers – XP assumes that programmers using XP take on the responsibility for catching unit-level mistakes. XP uses the concept of test-first programming.  In test-first programming, the team begins each cycle (sprint in Scrum) by writing tests that will fail until the code is written. Once the tests are written and executed to prove they will fail, the team writes the code and runs the tests until they pass. It is at least a partial definition of done. As the team uncovers new details, new tests will be specified and incorporated into the team’s test suite.  When testers are not directly involved writing and executing tests they can work on extending automated testing.

Interaction designers – Interaction designers work with customers to write and clarify stories. Interaction designers deliver analysis of actual usage of the system to decide what the system needs to do next. The interaction designer in XP would also encompass the UX and UI designer roles as they have evolved since XP Explained was written and updated. The designer tends to be a bit in front of the developers to reduce potential delays.

Architects – Architects help the team to keep the big picture in mind as development progresses in small incremental steps. Similar to the interaction designer, the architect evolves the big picture just enough ahead of the development team to provide direction (SAFe calls this the architectural runway). Evolving the architecture in small steps and gathering feedback from incremental system testing as development progresses reduces the risk that the project will wander off track.

Project managers – Project managers (PM) facilitate communication both inside the team and between customers, suppliers and the rest of the organization. Beck suggests that the PMO act as the team historians. Project managers keep the team “plan” synchronized with reality, based on how the team is performing and on what is happening outside the team.

Product managers – Product managers write stories, pick themes and stories for the quarterly cycle, pick stories in the weekly cycle, answer questions as development progresses and help when new information is uncovered. The product manager helps the whole team prioritize work. (Note: this is different from the concept of the product owner in Scrum.) The product manager should help the team focus on pieces of work that allow the system to be whole at the end of every cycle.

Executives – The executives’ role in XP is to provide an environment for a team so they have courage, confidence, and accountability. Beck suggests that executives trust the metrics.  The first metric is the number of defects found after development.  The fewer the better. The second metric that executives should leverage to build trust in XP is the time lag between idea inception and when the idea begins generating revenue.  This metric is also known as “concept to cash” (faster is better).

Technical writers – In XP, the technical writer role generates feedback by asking the question, “How can I explain that?” The tech writer can also help to create a closer relationship with users as they help them to learn about the product, listen to their feedback and then to address any confusion between the development team and the user community. Embedding the tech writer role into the XP team allows the team to get feedback on a more timely basis, rather than waiting until much later in the development cycle.

Users – Users help write user stories, provide business intelligence and make business domain decisions as development progresses. Users must be able to speak for the larger business community; they need to command a broad consensus for the decisions they make. If users can’t command a broad consensus from the business community for the decisions they make, they should let the team work on something else first while they get their ducks in a row.

Programmers – Programmers estimate stories and tasks, break stories down into smaller pieces, write tests, write code, run tests, automate tedious development processes, and gradually improve the design of the system. As with all roles in XP, the most valuable developers combine specialization with a broad set of capabilities.

Human resources – Human resources needs to find a way to hire the right individuals and then to evaluate teams. Evaluating teams requires changing the review process to focus on teams, rather than on individuals.

XP addressed roles that most discussions of Scrum have ignored, but that are needed to deliver a project. Roles should not be viewed as a rigid set of specialties that every project requires at every moment.  Teams and organizations need to add and subtract roles as needed. XP team members need to have the flexibility to shift roles as needed to maximize the flow of work.  

Chapter 11 – The Theory of Constraints

In order to find opportunities for improvement in the development process using XP, begin by determining which problems are development problems and which are caused outside of the development process. This first step is important because XP is only focused on the software development process (areas like marketing are out of scope). One approach for improving software development is to look at the throughput of the software development process. The theory of constraints (ToC) is a systems thinking approach to process improvement. A simple explanation of the ToC is that the output of any system or process is limited by a very small number of constraints within the process. Using the ToC to measure the throughput of the development process (a system from the point of view of the ToC) provides the basis for identifying constraints, making a change and then finding the next constraint. Using the ToC as an improvement approach maximizes the output of the overall development process rather than focusing on the local maximization of individual steps.
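
A toy illustration of that idea in code (hypothetical numbers, not from Beck or Goldratt): the pipeline's throughput is capped by its slowest step, so improving a non-constraint changes nothing, while elevating the constraint moves the bottleneck somewhere else.

# Throughput of a simple three-step pipeline is limited by its slowest step (the constraint).
def weekly_throughput(steps_per_week):
    return min(steps_per_week.values())

pipeline = {"analysis": 12, "development": 5, "testing": 9}   # stories per week, made up

print(weekly_throughput(pipeline))    # 5  -> development is the constraint

pipeline["testing"] = 20              # speed up a non-constraint...
print(weekly_throughput(pipeline))    # 5  -> ...and overall throughput does not move

pipeline["development"] = 15          # elevate the constraint itself
print(weekly_throughput(pipeline))    # 12 -> throughput improves; analysis becomes the new constraint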

The theory of constraints is not a perfect fit for software development because software development is more influenced by people and is, therefore, more variable than the mechanical transformation of raw materials. An over-reliance on concepts like the ToC will tend to overemphasize process and engineering solutions over people solutions, such as a team approach. This is a caution rather than a warning to avoid process approaches. In addition, systems approaches can highlight issues outside of development’s span of control. Getting others in the organization to recognize issues they are not ready to accept or address can cause conflict. Beck ends the chapter with the advice, “If you don’t have executive sponsorship, be prepared to do a better job yourself without recognition or protection.” Unsaid is that you will also have to be prepared for the consequences of your behavior.

Previous installments of Extreme Programing Explained, Second Edition (2005) on Re-read Saturday:

Extreme Programming Explained: Embrace Change Second Edition Week 1, Preface and Chapter 1

Week 2, Chapters 2 – 3

Week 3, Chapters 4 – 5

Week 4, Chapters 6 – 7  

Week 5, Chapters 8 – 9

 


Categories: Process Management

Stuff The Internet Says On Scalability For July 22nd, 2016

Hey, it's HighScalability time:


It's not too late London. There's still time to make this happen

 

If you like this sort of Stuff then please support me on Patreon.
  • 40%: energy Google saves in datacenters using machine learning; 2.3: times more energy knights in armor spend than when walking; 1000x: energy efficiency of 3D carbon nanotubes over silicon chips; 176,000: searchable documents from the Founding Fathers of the US; 93 petaflops: China’s Sunway TaihuLight; $800m: Azure's quarterly revenue; 500 Terabits per square inch: density when storing a bit with an atom; 2 billion: Uber rides; 46 months: jail time for accessing a database; 

  • Quotable Quotes:
    • Lenin: There are decades where nothing happens; and there are weeks where decades happen.
    • Nitsan Wakart: I have it from reliable sources that incorrectly measuring latency can lead to losing ones job, loved ones, will to live and control of bowel movements.
    • Margaret Hamilton~ part of the culture on the Apollo program “was to learn from everyone and everything, including from that which one would least expect.”
    • @DShankar: Basically @elonmusk plans to compete with -all vehicle manufacturers (cars/trucks/buses) -all ridesharing companies -all utility companies
    • @robinpokorny: ‘Number one reason for types is to get idea what the hell is going on.’ @swannodette at #curryon
    • Dan Rayburn: Some have also suggested that the wireless carriers are seeing a ton of traffic because of Pokemon Go, but that’s not the case. Last week, Verizon Wireless said that Pokemon Go makes up less than 1% of its overall network data traffic.
    • @timbaldridge: When people say "the JVM is slow" I wonder to what dynamic, GC'd, runtime JIT'd, fully parallel, VM they are comparing it to.
    • @papa_fire: “Burnout is when long term exhaustion meets diminished interest.”  May be the best definition I’ve seen.
    • Sheena Josselyn: Linking two memories was very easy, but trying to separate memories that were normally linked became very difficult
    • @mstine: if your microservices must be deployed as a complete set in a specific order, please put them back in a monolith and save yourself some pain
    • teaearlgraycold: Some people, when confronted with a problem, think “I know, I'll use regular expressions.” Now they have two problems.
    • Erik Duindam:  I bake minimum viable scalability principles into my app.
    • Hassabis: It [DeepMind] controls about 120 variables in the data centers. The fans and the cooling systems and so on, and windows and other things. They were pretty astounded.
    • @WhatTheFFacts: In 1989, a new blockbuster store was opening in America every 17 hours.
    • praptak: It [SRE] changes the mindset from "Failure? Just log an error, restore some 'good'-ish state and move on to the next cool feature." towards "New cool feature? What possible failures will it cause? How about improving logging and monitoring on our existing code instead?"
    • plusepsilon: I transitioned from using Bayesian models in academia to using machine learning models in industry. One of the core differences in the two paradigms is the "feel" when constructing models. For a Bayesian model, you feel like you're constructing the model from first principles. You set your conditional probabilities and priors and see if it fits the data. I'm sure probabilistic programming languages facilitated that feeling. For machine learning models, it feels like you're starting from the loss function and working back to get the best configuration

  • Isn't it time we admit Dark Energy and Dark Matter are simply optimizations in the algorithms running the sim of our universe? Occam's razor. Even the Eldritch engineers of our creation didn't have enough compute power to simulate an entire universe. So they fudged a bit. What's simpler than making 90 percent of matter in our galaxy invisible?

  • Do you have one of these? Google has a Head of Applied AI.

  • Uber with a great two article series on their stack. Part uno, Part deux: Our business runs on a hybrid cloud model, using a mix of cloud providers and multiple active data centers...We currently use Schemaless (built in-house on top of MySQL), Riak, and Cassandra...We use Redis for both caching and queuing. Twemproxy provides scalability of the caching layer without sacrificing cache hit rate via its consistent hashing algorithm. Celery workers process async workflow operations using those Redis instances...for logging, we use multiple Kafka clusters...This data is also ingested in real time by various services and indexed into an ELK stack for searching and visualizations...We use Docker containers on Mesos to run our microservices with consistent configurations scalably...Aurora for long-running services and cron jobs...Our service-oriented architecture (SOA) makes service discovery and routing crucial to Uber’s success...we’re moving to a pub-sub pattern (publishing updates to subscribers). HTTP/2 and SPDY more easily enable this push model. Several poll-based features within the Uber app will see a tremendous speedup by moving to push....we’re prioritizing long-term reliability over debuggability...Phabricator powers a lot of internal operations, from code review to documentation to process automation...We search through our code on OpenGrok...We built our own internal deployment system to manage builds. Jenkins does continuous integration. We combined Packer, Vagrant, Boto, and Unison to create tools for building, managing, and developing on virtual machines. We use Clusto for inventory management in development. Puppet manages system configuration...We use an in-house documentation site that autobuilds docs from repositories using Sphinx...Most developers run OSX on their laptops, and most of our production instances run Linux with Debian Jessie...At the lower levels, Uber’s engineers primarily write in Python, Node.js, Go, and Java...We rip out and replace older Python code as we break up the original code base into microservices. An asynchronous programming model gives us better throughput. And lots more.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Mahout/Hadoop: org.apache.hadoop.ipc.RemoteException: Server IPC version 9 cannot communicate with client version 4

Mark Needham - Fri, 07/22/2016 - 14:55

I’ve been working my way through Dragan Milcevski’s mini tutorial on using Mahout to do content based filtering on documents and reached the final step where I needed to read in the generated item-similarity files.

I got the example compiling by using the following Maven dependency:

<dependency>
      <groupId>org.apache.mahout</groupId>
      <artifactId>mahout-core</artifactId>
      <version>0.9</version>
</dependency>

Unfortunately when I ran the code I ran into a version incompatibility problem:

Exception in thread "main" org.apache.hadoop.ipc.RemoteException: Server IPC version 9 cannot communicate with client version 4
	at org.apache.hadoop.ipc.Client.call(Client.java:1113)
	at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
	at com.sun.proxy.$Proxy1.getProtocolVersion(Unknown Source)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
	at com.sun.proxy.$Proxy1.getProtocolVersion(Unknown Source)
	at org.apache.hadoop.ipc.RPC.checkVersion(RPC.java:422)
	at org.apache.hadoop.hdfs.DFSClient.createNamenode(DFSClient.java:183)
	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:281)
	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:245)
	at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:100)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1446)
	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:67)
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1464)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:263)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:124)
	at com.markhneedham.mahout.Similarity.getDocIndex(Similarity.java:86)
	at com.markhneedham.mahout.Similarity.main(Similarity.java:25)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)

Version 0.9.0 of mahout-core was published in early 2014 so I expect it was built against an earlier version of Hadoop than I’m using (2.7.2).

I tried updating the Hadoop dependencies that appeared in the stack trace, but to no avail.

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.7.2</version>
</dependency>
 
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>2.7.2</version>
</dependency>

When stepping through the stack trace I noticed that my program was still using an old version of hadoop-core, so with one last throw of the dice I decided to try explicitly excluding that:

<dependency>
    <groupId>org.apache.mahout</groupId>
    <artifactId>mahout-core</artifactId>
    <version>0.9</version>
 
    <exclusions>
        <exclusion>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-core</artifactId>
        </exclusion>
    </exclusions>
</dependency>

And amazingly it worked. Now, finally, I can see how similar my documents are!
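
(If you hit something similar, running mvn dependency:tree is a quick way to see which hadoop-core version is actually being pulled in transitively, before and after adding the exclusion.)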

Categories: Programming

Hadoop: DataNode not starting

Mark Needham - Fri, 07/22/2016 - 14:31

In my continued playing with Mahout I eventually decided to give up using my local file system and use a local Hadoop instead since that seems to have much less friction when following any examples.

Unfortunately all my attempts to upload any files from my local file system to HDFS were being met with the following exception:

java.io.IOException: File /user/markneedham/book2.txt could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1448)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:690)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:342)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1350)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1346)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:742)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1344)
 
at org.apache.hadoop.ipc.Client.call(Client.java:905)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:198)
at $Proxy0.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy0.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:928)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:811)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:427)

I eventually realised, from looking at the output of jps, that the DataNode wasn’t actually starting up, which explained the error message I was seeing.

A quick look at the log files showed what was going wrong:


/usr/local/Cellar/hadoop/2.7.1/libexec/logs/hadoop-markneedham-datanode-marks-mbp-4.zte.com.cn.log

2016-07-21 18:58:00,496 WARN org.apache.hadoop.hdfs.server.common.Storage: java.io.IOException: Incompatible clusterIDs in /usr/local/Cellar/hadoop/hdfs/tmp/dfs/data: namenode clusterID = CID-c2e0b896-34a6-4dde-b6cd-99f36d613e6a; datanode clusterID = CID-403dde8b-bdc8-41d9-8a30-fe2dc951575c
2016-07-21 18:58:00,496 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to /0.0.0.0:8020. Exiting.
java.io.IOException: All specified directories are failed to load.
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:477)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1361)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1326)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:316)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:801)
        at java.lang.Thread.run(Thread.java:745)
2016-07-21 18:58:00,497 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to /0.0.0.0:8020
2016-07-21 18:58:00,602 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
2016-07-21 18:58:02,607 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2016-07-21 18:58:02,608 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2016-07-21 18:58:02,610 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:

I’m not sure how my clusterIDs got out of sync, although I expect it’s because I reformatted HDFS at some stage without realising it. There are other ways of solving this problem, but the quickest for me was to just nuke the DataNode’s data directory, which the log file told me sits here:

sudo rm -r /usr/local/Cellar/hadoop/hdfs/tmp/dfs/data/current
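
If you’d prefer not to delete anything, one of those other ways is to make the IDs agree by hand: the DataNode keeps its clusterID in a VERSION file under the data directory named in the log, and editing it to match the NameNode’s value should have the same effect. A rough, untested sketch, using the paths and IDs reported above:

# inspect the DataNode's stored clusterID
cat /usr/local/Cellar/hadoop/hdfs/tmp/dfs/data/current/VERSION
# change the clusterID line so it matches the NameNode's ID from the log, i.e.
# clusterID=CID-c2e0b896-34a6-4dde-b6cd-99f36d613e6a
# then restart the DataNode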

I then re-ran the hstart script that I stole from this tutorial and everything, including the DataNode this time, started up correctly:

$ jps
26736 NodeManager
26392 DataNode
26297 NameNode
26635 ResourceManager
26510 SecondaryNameNode

And now I can upload local files to HDFS again. #win!
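
The upload itself is nothing more exotic than a put, something along these lines (file name taken from the error above):

$ hdfs dfs -put book2.txt /user/markneedham/book2.txt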

Categories: Programming

Customer Satisfaction Metrics and Quality

On a scale of fist to five, I’m at a ten.

Quality is partly about the number of defects delivered in a piece of software and partly about how the stakeholders and customers experience the software.  Experience is typically measured as customer satisfaction. Customer satisfaction is a measure of how well the products and services supplied by a company meet or surpass customer expectations. Customer satisfaction is impacted by all three aspects of software quality: functional (what the software does), structural (whether the software meets standards) and process (how the code was built).

Surveys can be used to collect both customer and team-level data.  Satisfaction measures whether products, services, behaviors, or the work environment meet expectations.

  1. Asking:  Asking the question, “are you happy (or some variant of the word happy) with the results of XYZ project?” is an assessment of satisfaction. The answer to that simple question will indicate whether the people you are asking are “happy”, or whether you need to ask more questions.  Asking is a powerful tool and can be as simple as asking a single question of a team or group of customers, or as complicated as using a multifactor survey. Even though just asking whether someone is satisfied and then listening to the answer can provide powerful information, the size of the project or the complexity of the software being delivered often dictates a more formal approach, which means that surveys are often used to collect satisfaction data.  Product or customer satisfaction is typically measured after a release or on a periodic basis.

    Fist to Five, a simple asking technique: Agile teams measure team-level satisfaction using simple techniques such as Fist-to-Five.  Fist-to-five is a simple asking technique in which team members are asked to vote on how satisfied they are by flashing a number of fingers all at the same time.  Showing five fingers means you are very satisfied, while a fist (no fingers) means you are unsatisfied.  This form of measurement can be used to assess team satisfaction on a daily basis. (A simple video explanation) I generally post an average score on the wall in the team room in order to track the team’s satisfaction trend.

  2. The Net Promoter metric is a more advanced form of customer satisfaction measure than simply asking, but less complicated than the multifactor indexes that are sometimes generated. Promoters are people who are so satisfied that they will actively spread knowledge to others. Generating the metric begins by asking “how likely are you to recommend the product or organization being measured to a friend or colleague?” I have seen many variants of the net promoter question, but at the heart of it, the question is whether the respondent will recommend the service, product, team or organization.  The response is scored using a scale from 0 – 10.  Answers of 9 or 10 represent promoters, 7 or 8 are neutral and all other answers represent detractors. The score is calculated using the following formula: (# of Promoters - # of Detractors) / (Total Promoters + Neutral + Detractors) x 100.   If ten people responded to a net promoter question and 5 were promoters, 3 neutral and 2 detractors, the net promoter score is 30 ((5 - 2) / 10 x 100; see the quick check below).  Over time the goal is to improve the net promoter score, which will increase the chance that your work will be recommended.
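
A quick shell check of that arithmetic (purely illustrative; this uses integer division, so awkward totals get truncated):

# promoters=5, neutral=3, detractors=2
$ echo $(( (5 - 2) * 100 / (5 + 3 + 2) ))
30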

Software quality is a nuanced concept that reflects many factors, some of which are functional, structural or process-related. Satisfaction is a reflection of quality from a different perspective than measuring defects or code structure. The essence of customer satisfaction is the very simple question: are you happy with what we delivered? Knowing whether the team, stakeholders, and customers are happy with what was delivered, or with the path that was taken to get to that delivery, is often just as important as knowing the number of defects that were delivered.


Categories: Process Management

Quote of the Day

Herding Cats - Glen Alleman - Thu, 07/21/2016 - 19:03

A skeptic will question claims, then embrace the evidence. A denier will question claims, then reject the evidence. - Neil deGrasse Tyson

Think of this whenever there is a conjecture that has no testable evidence to support the claim. And think even more carefully when those making the conjectured claim refuse to provide evidence. When that is the case, it is appropriate to ignore the conjecture altogether.

Categories: Project Management

Mahout: Exception in thread “main” java.lang.IllegalArgumentException: Wrong FS: file:/… expected: hdfs://

Mark Needham - Thu, 07/21/2016 - 18:57

I’ve been playing around with Mahout over the last couple of days to see how well it works for content-based filtering.

I started following a mini tutorial from Stack Overflow but ran into trouble on the first step:

bin/mahout seqdirectory \
--input file:///Users/markneedham/Downloads/apache-mahout-distribution-0.12.2/foo \
--output file:///Users/markneedham/Downloads/apache-mahout-distribution-0.12.2/foo-out \
-c UTF-8 \
-chunk 64 \
-prefix mah
16/07/21 21:19:20 INFO AbstractJob: Command line arguments: {--charset=[UTF-8], --chunkSize=[64], --endPhase=[2147483647], --fileFilterClass=[org.apache.mahout.text.PrefixAdditionFilter], --input=[file:///Users/markneedham/Downloads/apache-mahout-distribution-0.12.2/foo], --keyPrefix=[mah], --method=[mapreduce], --output=[file:///Users/markneedham/Downloads/apache-mahout-distribution-0.12.2/foo-out], --startPhase=[0], --tempDir=[temp]}
16/07/21 21:19:20 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/07/21 21:19:20 INFO deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
16/07/21 21:19:20 INFO deprecation: mapred.compress.map.output is deprecated. Instead, use mapreduce.map.output.compress
16/07/21 21:19:20 INFO deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
Exception in thread "main" java.lang.IllegalArgumentException: Wrong FS: file:/Users/markneedham/Downloads/apache-mahout-distribution-0.12.2/foo, expected: hdfs://localhost:8020
	at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:646)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:194)
	at org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:106)
	at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
	at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1301)
	at org.apache.mahout.text.SequenceFilesFromDirectory.runMapReduce(SequenceFilesFromDirectory.java:156)
	at org.apache.mahout.text.SequenceFilesFromDirectory.run(SequenceFilesFromDirectory.java:90)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
	at org.apache.mahout.text.SequenceFilesFromDirectory.main(SequenceFilesFromDirectory.java:64)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
	at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
	at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:152)
	at org.apache.mahout.driver.MahoutDriver.main(MahoutDriver.java:195)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

I was trying to run the command against the local file system on my laptop, which should have been possible according to the instructions. I couldn’t find any flag that I could pass to Mahout to tell it not to use HDFS, but I eventually stumbled on someone else experiencing a similar problem.

It turns out that the last time I was playing around with Hadoop, in late 2015, I’d actually configured HDFS as the default file system and had completely forgotten. I needed to comment out the following config:

/usr/local/Cellar/hadoop/2.7.1/libexec/etc/hadoop/core-site.xml

<property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:8020</value>
</property>

I commented that property out and all was happy with the (Hadoop) world again.
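
An alternative that avoids editing the Hadoop config at all (untested here, so worth verifying against your Mahout version) is the MAHOUT_LOCAL environment variable, which the bin/mahout launcher checks in order to force local, non-HDFS execution:

# force Mahout to use the local file system, leaving core-site.xml alone
export MAHOUT_LOCAL=true
bin/mahout seqdirectory \
 --input file:///Users/markneedham/Downloads/apache-mahout-distribution-0.12.2/foo \
 --output file:///Users/markneedham/Downloads/apache-mahout-distribution-0.12.2/foo-out \
 -c UTF-8 -chunk 64 -prefix mah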

Categories: Programming