
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

The Science of Successful Organizational Change: Re-read Week 3 Led by Steven Adams: Chapter 1: Failed Change

The Science of Successful Organizational Change


This week Steven starts on the numbered chapters of Paul Gibbons’ book The Science of Successful Organizational Change (the introduction was content rich). Remember to use the link to buy a copy to support the author, the podcast, and the blog!

This week Steven sent me a note indicating he now understood that the re-read process really causes the person leading it to get much deeper into a book, leading to a need to write! As usual, I will add my comments in the comments for this entry. – Tom

Chapter 1: “Failed Change: The Greatest Preventable Cost to Business?”

Project Failures

Change project failures are at 70%, or is it 50%?  Gibbons coaches us to question and dig down to get at the truth in this chapter.  Plus, he introduces us to many of his ideas around change management that he will discuss later in this book.

Gibbons helps us better understand the question “Are change project failures at 70% or 50%?” Consider these four points about this question.

Point 1:  The oft-quoted 70% failure rate of change programs is likely only a “modest exaggeration.”  “The statistic ‘70% fails’ was based on survey data published in a non-peer-reviewed magazine and on out-of-context remarks by two well-respected Harvard professors (Kotter and Nohria)” [footnote provided on page 18] (p. 18)

Point 2:  The 50% failure rate comes from several surveys and is probably closer to the actual failure rate.  However, Gibbons warns that surveys asking whether change initiatives succeed or fail are not robustly scientific (p. 19), but the figure is certainly more accurate than the 70% that consulting firms quote to engage with clients, noted in point 1.

Point 3:  The definition of failure is at best loose.  Do we need to ask what failure really means?  Gibbons references the SOCKS taxonomy of project failures to help us better understand and define failure.  SOCKS stands for:

  1. S = projects that fail because of a shortfall of some expected benefit.
  2. O = projects that fail because of cost overruns.
  3. C = projects that fail because of some unintended consequence.
  4. K = projects that fail because they are killed/terminated without completing.
  5. S = projects that fail after deployment because they are not sustainable.

Point 4:  Even with a crisp definition of failure, the definition of a change project is often ambiguous.  Gibbons lists 10 categories to help group and define change projects:

  1. Strategy deployment
  2. Restructuring/downsizing
  3. Technology change
  4. Mixed change
  5. TQM (Six Sigma)
  6. Mergers and acquisitions
  7. Reengineering/process design
  8. Software development/installation
  9. Business expansion
  10. Culture change

Each of these change project categories has its own associated failure rate.  Culture changes generally have the worst success rates of all types of change projects.

Later in this chapter, Gibbons uses the high failure rate of change projects to launch another topic: moving from Change Management to Change Leadership. It is Gibbons’ position that change management knowledge and skills need to be more widespread across the management team. The need is driven by the fact that leading and managing change is a common activity at every level of the organization. Gibbons states the need as:

“The amount of change that managers deal with, the high failure rates of programmatic change, and the constant challenges of continuous change suggest that failed (or failing) change is the single largest preventable cost to the business.” (p. 26)

Change is Continuous

A key point Gibbons makes about change is that change is constant. This observation mirrors my professional experience. This means that change is NOT a single episodic disruption to a business that occurs and then everything settles down to a status quo. The change model of “unfreeze, change, freeze” no longer applies, unless you are decorating ice cubes.

Gibbons previews many of the topics covered in this book by listing several “Change Myths.” He includes the chapter in which each myth will be debunked and/or challenged.

I challenge you to pick one of these Change Myths (pages 28-29) that you think cannot be false.  Many of these Change Myths are commonly accepted truths about change and/or human behavior.

My pick – “Involving many people slows progress.”
Chapter 8 should shed light for me about this Change Myth.

Next week we look at the Part I overview “Change-Agility” and Chapter 2, “From Change Fragility to Change-Agility.”

Previous entries in the re-read of the book The Science of Successful Organizational Change (buy a copy!)

Week 1: Game Plan

Week 2: Introduction

 

 


Categories: Process Management

Quote of the Day

Herding Cats - Glen Alleman - Sat, 07/22/2017 - 21:51

Before we build a better mousetrap, we need to find out if there are any mice out there. - Yogi Berra

Categories: Project Management

Pi Day (Again)

Herding Cats - Glen Alleman - Sat, 07/22/2017 - 15:05

22/7 = 3.14285714286


Categories: Project Management

GAO Cost Estimating and Assessment Guide Applied to Agile

Herding Cats - Glen Alleman - Fri, 07/21/2017 - 21:12

The GAO Cost Estimating and Assessment Guide has 12 steps. These describe the increasing maturity of the project's artifacts. They are not specific to Agile Software Development and can be applied to any project development lifecycle. But here's how they are connected; if any of these pieces is missing, the probability of the project's success is reduced.

GAO step and its Agile connection:

  1. Capture All Activities: The Product Roadmap and Release Plan describe the needed capabilities to be delivered by the project. These capabilities are connected to the business strategy. They implement that strategy through the Features that are decomposed from the Capabilities in the Release Plan.
  2. Sequence These Activities: The order of the needed Capabilities is defined in the business strategy. This is connected to the Financial Plan for earning back the investment of the development. This financial management process is the basis of decision making for the business. ROI and breakeven dates are part of managing any business for profit.
  3. Assign Resources to These Activities: Agile teams, for the most part, are a fixed set of resources, so the spending plan is essentially flat.
  4. Establish Duration for These Activities: Business runs on the time value of money. Business has a fiduciary need to know when Value will start being accrued. This is the role of the Product Roadmap and Release Plan, either a Cadence Release Plan or a Capabilities Release Plan. Again, this is the basis of Managerial Finance.
  5. Verify Schedule is Traceable Horizontally and Vertically: The delivery of Value to the Release Plan and the Product Roadmap is vertical. The production of Value from the Product Backlog to the Sprints is horizontal.
  6. Confirm Critical Path (the Schedule Matches the Program's Needs): Agile doesn't have the formal notion of a critical path, but there are critical Features that are needed for the Capabilities. These need to appear at critical times for the business to accrue the planned benefits and fulfill the business plan.
  7. Ensure Reasonable Total Float: All project work operates in the presence of uncertainty. Uncertainty creates risk. "Risk Management is how Adults Manage Projects" - Tim Lister. Be an adult and provide margin for all work. No margin? You're late and over budget before you start, and have lowered the probability of technical success.
  8. Conduct Schedule Risk Analysis: Uncertainty creates risk. Agile is a participant in Risk Management, but Agile is NOT Risk Management. Risk Management has 6 processes; see SEI Continuous Risk Management, https://goo.gl/brVght
  9. Update Schedule with Actual Progress: Physical Percent Complete is the only measure of progress to Plan. Agile has many plans: Product Roadmap, Release Plan, Product Backlog, Sprint Plan. Each plan must have some measure of progress. Story Points are NOT measures of progress because they are ordinal. Measures of progress must be cardinal.
  10. Maintain the Baseline with Repeatable Processes: Agile encourages change, but those changes must be recorded so a reference class can be built of the time and effort it has taken to develop the Features. These Features can then be collected into a Feature Breakdown Structure (FBS) and used to estimate future features as Reference Classes (a small sketch of this idea follows the list).
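To make that reference-class idea concrete, here is a minimal Java sketch. It is my illustration, not part of the GAO guide or this post; the FBS categories and effort figures are invented, and the percentile spread is just one simple way to carry uncertainty into the estimate.

import java.util.*;

// Minimal sketch of reference-class estimating from a Feature Breakdown Structure.
// The FBS categories, feature groupings, and durations below are invented for illustration.
public class ReferenceClassEstimate {

    public static void main(String[] args) {
        // Actual effort (in days) recorded for completed features, keyed by FBS category.
        Map<String, List<Double>> actualsByCategory = Map.of(
                "reporting", List.of(8.0, 11.0, 9.5, 14.0, 10.0),
                "integration", List.of(21.0, 18.0, 30.0, 25.0));

        // Estimate a new "reporting" feature from its reference class of past actuals.
        List<Double> referenceClass = new ArrayList<>(actualsByCategory.get("reporting"));
        Collections.sort(referenceClass);

        double median = percentile(referenceClass, 0.50);  // "most likely" value
        double p80 = percentile(referenceClass, 0.80);     // value with margin for uncertainty

        System.out.printf("Reporting feature estimate: ~%.1f days (median), %.1f days at 80%% confidence%n",
                median, p80);
    }

    // Simple nearest-rank percentile over a sorted list of actuals.
    private static double percentile(List<Double> sorted, double fraction) {
        int index = (int) Math.ceil(fraction * sorted.size()) - 1;
        return sorted.get(Math.max(0, Math.min(index, sorted.size() - 1)));
    }
}

As more features complete, their actuals feed back into the reference classes, and the same lookup can support Estimate to Complete updates.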

Categories: Project Management

Stuff The Internet Says On Scalability For July 21st, 2017

Hey, it's HighScalability time:

Afraid of AI? Fire ants have sticky pads so they can form rafts, build towers, cross streams, & order takeout. We can CRISPR these guys to fight Skynet. (video, video, paper)
If you like this sort of Stuff then please support me on Patreon.

 

  • 222x: Bitcoin less efficient than a physical system of metal coins and paper/fabric/plastic; #1: Python use amongst Spectrum readers; 3x: time spent in apps that don't make us happy; 1 million: DigitalOcean users; 11.6 million: barrels of oil a day saved via tech and BigData; 200,000: cores on Cray super computer; $200B: games software/hardware revenue by 2021; $3K: for 50 Teraflops AMD Vega Deep Learning Box; 24.4 Gigawatts: China New Solar In First Half Of 2017;

  • Quotable Quotes:
    • sidlls: I think instead there is a category error being made: that CS is an appropriate degree (on its own) to become a software engineer. It's like suggesting a BS in Physics qualifies somebody to work as an engineer building a satellite.
    • Elon Musk: AI is a fundamental existential risk for human civilization, and I don’t think people fully appreciate that
    • Mike Elgan: Thanks to machine learning, it's now possible to create a million different sensors in software using only one actual sensor -- the camera.
    • Amin Vahdat (Google): The Internet is no longer about just finding a path, any path, between a pair of servers, but actually taking advantage of the rich connectivity to deliver the highest levels of availability, the best performance, the lowest latency. Knowing this, how you would design protocols is now qualitatively shifted away from pairwise decisions to more global views.
    • naasking: You overestimate AI. Incompleteness is everywhere in CS. Overcoming these limitations is not trivial at all.
    • 451 Research believes serverless is poised to undergo a round of price cutting this year.
    • Nicholas Bloom: We found massive, massive improvement in performance—a 13% improvement in performance from people working at home
    • @CoolSWEng: "A Java new operation almost guarantees a cache miss. Get rid of them and you'll get C-like performance." - @cliff_click #jcrete
    • DarkNetMarkets: We're literally funding our own investigation. 
    • Tristan Harris: By shaping the menus we pick from, technology hijacks the way we perceive our choices and replaces them with new ones. But the closer we pay attention to the options we’re given, the more we’ll notice when they don’t actually align with our true needs.
    • xvaier: If I have one thing to tell anyone who is looking for business ideas to try out their new programming skills on, I strongly suggest taking the time to learn as much as possible about the people to whom you want to provide a solution, then recruiting one of them to help you build it, lest you become another project that solves a non-issue beautifully.
    • @sebgoa: Folks, there were schedulers before kubernetes. Let's get back down to earth quickly
    • Mark Shead: A finite state machine is a mathematical abstraction used to design algorithms. In simple terms, a state machine will read a series of inputs. When it reads an input it will switch to a different state. Each state specifies which state to switch for a given input. This sounds complicated but it is really quite simple.
    • xantrel: I started a small business that started to grow, I thought I had to migrate to AWS and increase my cost by 5xs eventually, but so far Digital Ocean with their hosted products and block storage has handled the load amazingly well.
    • danluu: when I’m asked to look at a cache related performance bug, it’s usually due to the kind of thing we just talked about: conflict misses that prevent us from using our full cache effectively. This isn’t the only way for that to happen – bank conflicts and false dependencies are also common problems
    • Charles Hoskinson: People say ICOs (Initial Coin Offering) are great for Ethereum because, look at the price, but it’s a ticking time-bomb. There’s an over-tokenization of things as companies are issuing tokens when the same tasks can be achieved with existing blockchains. People are blinded by fast and easy money.
    • Charles Schwab: There don't seem to be any classic bubbles near bursting at the moment—at least not among the ones most commonly referenced as potential candidates.
    • Sertac Karaman: We are finding that this new approach to programming robots, which involves thinking about hardware and algorithms jointly, is key to scaling them down.
    • Michael Elling: When do people wake up and say that we’ve moved full circle back to something that looks like the hierarchy of the old PSTN? Just like the circularity of processing, no?
    • Benedict Evans: Content and access to content was a strategic lever for technology. I’m not sure how much this is true anymore.  Music and books don’t matter much to tech anymore, and TV probably won’t matter much either. 
    • SeaChangeViaExascaleOnDown: Currently systems are still based around mostly separately packaged processor elements(CPUs, GPUs, and other) processors but there will be an evolution towards putting all these separate processors on MCMs or Silicon Interposers, with silicon interposers able to have the maximum amount of parallel traces(And added active circuitry) over any other technology.
    • BoiledCabbage: Call me naive, but am I the only one who looks at mining as one of the worst inventions for consuming energy possible?
    • Amin Vahdat (Google):  Putting it differently, a lot of software has been written to assume slow networks. That means if you make the network a lot faster, in many cases the software can’t take advantage of it because the software becomes the bottleneck.

  • Dropbox has 1.3 million lines of Go code, 500 million users, 500 petabytes of user data, 200,000 business customers, and a multi-exabyte Go storage system. Go Reliability and Durability at Dropbox. They use it for: RAT: rate limiting and throttling; HAT: memcached replacement; AFS: file system to replace global Zookeeper; Edgestore: distributed database; Bolt: for messaging; DBmanager: for automation and monitoring of Dropbox’s 6,000+ databases; “Jetstream”, “Telescope”, block routing, and many more. The good: Go is productive, easy to write and consume services, good standard library, good debugging tools. The less good: dealing with race conditions.

  • Professor Jordi Puig-Suari talks about the invention of CubeSat on embedded.fm. 195: A BUNCH OF SPUTNIKS. Fascinating story of how thinking different created a new satellite industry. The project wasn't on anyone's technology roadmap, nobody knew they needed it, it just happened. A bunch of really bright students, in a highly constrained environment, didn't have enough resources to do anything interesting, so they couldn't build spacecraft conventionally. Not knowing what you're doing is an advantage in highly innovative environments. The students took more risk and eliminated redundancies. One battery. One radio. Taking a risk that things can go wrong. They looked for the highest performance components they could find, these were commercial off the shelf components that when launched into space actually worked. The mainline space industry couldn't take these sort of risks. Industry started paying attention because the higher performing, lower cost components, even with the higher risk, changed the value proposition completely. You can make it up with numbers. You can launch 50 satellites for the cost of one traditional satellite. Sound familiar? Cloud computing is based on this same insight. Modern datacenters have been built on commodity parts, and low-cost miniaturized parts driven by smartphones have created whole new industries. CubeSats had a standard size, so launch vehicles could standardize also, it didn't matter where the satellites came from, they could be launched. Sound familiar? This is the modularization of satellite launching, the same force that drives all mass commercialization. Now the same ideas are being applied to bigger and bigger spacecraft. It's now a vibrant industry. Learning happens more quickly because they get to fly more. Sound familiar? Agile, iterative software development is the dominant methodology today. 

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Three Leadership Tips from Jack Sparrow

Herding Cats - Glen Alleman - Fri, 07/21/2017 - 13:41

Captain Jack Sparrow started as a supporting character to Orlando Bloom’s Will Turner. He then outshone the ostensible hero and became an archetype unto himself. Here are three Leadership Tips from an INCOSE presentation "We Need More Jack Sparrow, Savvy? A Swashbuckler’s Guide to System Modeling With SysML," given at the 2016 International Council on Systems Engineering Great Lakes Regional Conference.

These tips are then applied to managing projects in the presence of uncertainty.

  1. There should be a ‘captain’ in there somewhere - While it is one of Captain Jack's many quirks, he never fails to remind others of his qualifications. You don’t want to alienate your team or co-workers with arrogance, but you can subtly remind others who is in charge through your confidence, vision, and passion. Once that leadership and vision are displayed, your employees will know what’s expected and have an example to follow. But this confidence must be anchored in knowledge, skill, and experience and demonstrated with tangible outcomes. No bloviating. You must be able to walk the walk of a project manager as well as talk the talk of a project manager. Knowing and Doing are two separate things. Knowing without doing provides principles, but no processes or practices. Doing without knowing creates chaos.
  2. You know that feeling you get when you’re standing in a high place…sudden urge to jump? I don’t have it - If there is one thing Jack is good at, it’s improvising and going with his gut. A good Project Manager has the instincts and self-confidence to know when to move forward, no matter the direction of the tides or even against the advice of naysayers. A good Project Manager also has the instincts to know when to stay put and lay low, not taking the leap. This needs to be backed up with data, but trusting the data is not enough. Good project managers have the ability to take calculated risks. Informed by data, but risks all the same. And to know when to play it safe.
  3. I thought I should give you a warning. We’re taking the ship. It’s nothing personal - Honest pirates are hard to come by; honest project managers should not be. It’s never easy to deliver bad news, but hearing it up front and from your own lips makes it easier for the team, customers, managers, and business leaders to digest and move forward. You need to remind them that decisions are based on what’s best for the project, the customer, and the firm; they’re not personal! Remember, sometimes you have to do what’s best in the long run, and not what’s necessarily popular right now.
Related articles: Applying the Right Ideas to the Wrong Problem | Estimating Processes in Support of Economic Analysis | Root Cause of Project Failure
Categories: Project Management

Iteration Planning: Common Variations on Standard User Stories

Proof First!

Iteration planning would be far simpler if every story were started and completed during a single iteration, if every story were independent, or if deadlines did not pop up messing up carefully crafted prioritization routines.  Unfortunately, real life is a bit messier.  Having a strategy to handle variations from the best case makes life easier.  A few of the common planning glitches are:

Expedited Stories – Most teams adopt some form of prioritization technique (examples include weighted shortest job first, hardest part first, and highest business value; a short WSJF sketch appears at the end of this post). Expediting a piece of work includes addressing the item out of order or asking the team to accelerate the item so it can be completed faster. The simplest coping mechanism is to have the product owner (and the person asking that the work be expedited, if different) identify which planned work items should be put on hold to accept the expedited work.  Expediting work items puts stress on the team and often sends a message that lack of planning is acceptable.

Deadlines – Dealing with deadlines (sometimes emergent) is a variation of expedited work.  The easiest coping mechanism is to build deadlines into the prioritization and grooming process so that deadlines are accounted for early in the planning process.  Problems crop up when deadlines are accepted without regard for team capacity.  Trying to deliver more work than is possible will lead to a wide range of poor outcomes.  Another problem occurs when deadlines are set that are not based on need.  Artificial constraints can lead to the “boy who cried wolf” syndrome, where real deadlines are ignored.

Carryover Work – While many will find it shocking, not every piece of work accepted into an iteration and started is completed during the iteration.  The uncompleted work should be groomed, prioritized, planned, and accepted into the next iteration just like any other piece of work. Often teams fail to reassess and groom uncompleted/carryover work and put these items at the top of the work being accepted in the next iteration.  Actively reassessing and grooming carryover stories is important to ensure that each story is properly formed and that, if new information was uncovered, the stories are resized and reprioritized properly.

Trivial Tasks – Accepting work that is of little or no value into the iteration should be avoided.  Well-formed user stories include the benefit/value they should deliver.  This should be true for a story delivering user functionality, an architecture component, technical items, or even correcting defects.  If work has no discernible value, ask the question: why are we doing this?

Recurring Tasks – Recurring tasks should generally be incorporated into the process (for example, checking code in on a daily basis) or into the definition of done (for example, system testing or security testing).  Writing stories for recurring tasks is akin to implementing a waterfall approach with an Agile wrapper.

Spikes – A spike is a type of user story that is used to answer a question, gather information, perform a specific piece of basic research, address project risks, or break a large story down. Fundamentally, spikes are used to address uncertainty by gathering specific information to understand a functional or technical requirement. For example, when the team needs to prove out a specific technical problem or does not have enough information to estimate the story, they would use a spike.  Spikes are an important tool that every Agile team should use when needed; however, they should not be overused.  Teams that overuse spikes generally are missing a significant knowledge component.  Find a source for the missing knowledge and add that knowledge source to the team or leverage the source as a subject matter expert/consultant.

Work gets to an Agile team in a variety of forms and not always in the order originally planned.  Stomping your feet and saying no is rarely the right answer.  Each common variation in process and flow can be addressed if anticipated and if the consequences are known and accepted by the whole team and their stakeholders.  
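As promised above, here is a small sketch of weighted shortest job first, the prioritization technique named in the Expedited Stories item. It is a generic Java illustration, not something from this post: the backlog items, cost-of-delay values, and job sizes are invented, and WSJF is simply cost of delay divided by job size, worked highest score first.

import java.util.*;

// Illustrative weighted-shortest-job-first (WSJF) ordering: score = cost of delay / job size.
// Item names and numbers are invented for the example.
public class WsjfExample {

    record BacklogItem(String name, double costOfDelay, double jobSize) {
        double wsjf() {
            return costOfDelay / jobSize;
        }
    }

    public static void main(String[] args) {
        List<BacklogItem> backlog = List.of(
                new BacklogItem("Expedited compliance fix", 20, 2),
                new BacklogItem("New reporting screen", 13, 8),
                new BacklogItem("Refactor billing module", 8, 5));

        // Work the highest WSJF score first.
        backlog.stream()
                .sorted(Comparator.comparingDouble(BacklogItem::wsjf).reversed())
                .forEach(item -> System.out.printf("%-28s WSJF = %.2f%n", item.name(), item.wsjf()));
    }
}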

 


Categories: Process Management

Seccomp filter in Android O

Android Developers Blog - Thu, 07/20/2017 - 22:10
Posted by Paul Lawrence, Android Security Engineer
In Android-powered devices, the kernel does the heavy lifting to enforce the Android security model. As the security team has worked to harden Android's userspace and isolate and deprivilege processes, the kernel has become the focus of more security attacks. System calls are a common way for attackers to target the kernel.
All Android software communicates with the Linux kernel using system calls, or syscalls for short. The kernel provides many device- and SOC-specific syscalls that allow userspace processes, including apps, to directly interact with the kernel. All apps rely on this mechanism to access collections of behavior indexed by unique system calls, such as opening a file or sending a Binder message. However, many of these syscalls are not used or officially supported by Android.
Android O takes advantage of a Linux feature called seccomp that makes unused system calls inaccessible to application software. Because these syscalls cannot be accessed by apps, they can't be exploited by potentially harmful apps.
seccomp filter
Android O includes a single seccomp filter installed into zygote, the process from which all the Android applications are derived. Because the filter is installed into zygote, and therefore all apps, the Android security team took extra caution to not break existing apps. The seccomp filter allows:
  • all the syscalls exposed via bionic (the C runtime for Android). These are defined in bionic/libc/SYSCALLS.TXT.
  • syscalls to allow Android to boot
  • syscalls used by popular Android applications, as determined by running Google's full app compatibility suite
Android O's seccomp filter blocks certain syscalls, such as swapon/swapoff, which have been implicated in some security attacks, and the key control syscalls, which are not useful to apps. In total, the filter blocks 17 of 271 syscalls in arm64 and 70 of 364 in arm.

Developers
Test your app for illegal syscalls on a device running Android O.
Detecting an illegal syscall
In Android O, the system crashes an app that uses an illegal syscall. The log printout shows the illegal syscall, for example:
03-09 16:39:32.122 15107 15107 I crash_dump32: performing dump of process 14942 (target tid = 14971)
03-09 16:39:32.127 15107 15107 F DEBUG   : *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
03-09 16:39:32.127 15107 15107 F DEBUG   : Build fingerprint: 'google/sailfish/sailfish:O/OPP1.170223.013/3795621:userdebug/dev-keys'
03-09 16:39:32.127 15107 15107 F DEBUG   : Revision: '0'
03-09 16:39:32.127 15107 15107 F DEBUG   : ABI: 'arm'
03-09 16:39:32.127 15107 15107 F DEBUG   : pid: 14942, tid: 14971, name: WorkHandler  >>> com.redacted <<<
03-09 16:39:32.127 15107 15107 F DEBUG   : signal 31 (SIGSYS), code 1 (SYS_SECCOMP), fault addr --------
03-09 16:39:32.127 15107 15107 F DEBUG   : Cause: seccomp prevented call to disallowed system call 55
03-09 16:39:32.127 15107 15107 F DEBUG   :     r0 00000091  r1 00000007  r2 ccd8c008  r3 00000001
03-09 16:39:32.127 15107 15107 F DEBUG   :     r4 00000000  r5 00000000  r6 00000000  r7 00000037
Affected developers should rework their apps to not call the illegal syscall.
Toggling seccomp filters during testing
In addition to logging errors, the seccomp installer respects setenforce on devices running userdebug and eng builds, which allows you to test whether seccomp is responsible for an issue. If you type:
adb shell setenforce 0 && adb shell stop && adb shell start
then no seccomp policy will be installed into zygote. Because you cannot remove a seccomp policy from a running process, you have to restart the shell for this option to take effect.
Device manufacturers
Because Android O includes the relevant seccomp filters at //bionic/libc/seccomp, device manufacturers don't need to do any additional implementation. However, there is a CTS test that checks for seccomp at //cts/tests/tests/security/jni/android_security_cts_SeccompTest.cpp. The test checks that add_key and keyctl syscalls are blocked and openat is allowed, along with some app-specific syscalls that must be present for compatibility.
Categories: Programming

Why Johnny Still Cannot Estimate

Herding Cats - Glen Alleman - Thu, 07/20/2017 - 18:40
  • He doesn't know how - He doesn't understand how estimates fit into the process of business and managerial finance of product or service development. He looks for existing examples and just sees bad examples.
  • He doesn't understand why estimates are needed - He doesn't understand the impact on the business of not knowing how long, how much, and what will be produced for the time and money. These are scarce resources, and decision making in the presence of scarce resources is called Microeconomics. Add uncertainty to these resources and it's still called Microeconomics. Those assigned to estimate do not relate their estimates to the cost, schedules, or resources that drive the business, or to the decisions the business makes with those estimates.
  • He'd rather be doing something else - He would rather be coding. There is a strong tendency to jump from some high-level functional notion of what the software should do to the coding, without further definition of the effort and duration to do that coding. Coding work is much more fun than making estimates, documenting the requirements, or writing tests.
  • He sees no reward in it - He doesn't care. No matter what estimates he makes, he will be battered by management and those who could not or would not take the time to participate in the development of the estimate. There is no reward for doing a good job.

All of these and more have NOTHING to do with the principle of making decisions in the presence of uncertainty. That is still in place.

There is NO Means

Related articles: Architecture-Centered ERP Systems in the Manufacturing Domain | IT Risk Management | Herding Cats: GAO Cost Estimating and Assessment Guide Applied to Agile | Herding Cats: Decisions Without Estimates? | Why Guessing is not Estimating and Estimating is not Guessing | Making Conjectures Without Testable Outcomes | Estimating Processes in Support of Economic Analysis | Capabilities Based Planning
Categories: Project Management

Code Health: To Comment or Not to Comment?

Google Testing Blog - Wed, 07/19/2017 - 22:17
This is another post in our Code Health series. A version of this post originally appeared in Google bathrooms worldwide as a Google Testing on the Toilet episode. You can download a printer-friendly version to display in your office.

By Dori Reuveni and Kevin Bourrillion

While reading code, often there is nothing more helpful than a well-placed comment. However, comments are not always good. Sometimes the need for a comment can be a sign that the code should be refactored.

Use a comment when it is infeasible to make your code self-explanatory. If you think you need a comment to explain what a piece of code does, first try one of the following:
  • Introduce an explaining variable. Instead of:
    // Subtract discount from price.
    finalPrice = (numItems * itemPrice)
        - min(5, numItems) * itemPrice * 0.1;
    try:
    price = numItems * itemPrice;
    discount = min(5, numItems) * itemPrice * 0.1;
    finalPrice = price - discount;
  • Extract a method. Instead of:
    // Filter offensive words.
    for (String word : words) { ... }
    try:
    filterOffensiveWords(words);
  • Use a more descriptive identifier name. Instead of:
    int width = ...; // Width in pixels.
    try:
    int widthInPixels = ...;
  • Add a check in case your code has assumptions. Instead of:
    // Safe since height is always > 0.
    return width / height;
    try:
    checkArgument(height > 0);
    return width / height;
There are cases where a comment can be helpful:
  • Reveal your intent: explain why the code does something (as opposed to what it does).
    // Compute once because it’s expensive.
  • Protect a well-meaning future editor from mistakenly “fixing” your code.
    // Create a new Foo instance because Foo is not thread-safe.
  • Clarification: a question that came up during code review or that readers of the code might have.
    // Note that order matters because...
  • Explain your rationale for what looks like a bad software engineering practice.
    @SuppressWarnings("unchecked") // The cast is safe because...
On the other hand, avoid comments that just repeat what the code does. These are just noise:
// Get all users.
userService.getAllUsers();
// Check if the name is empty.
if (name.isEmpty()) { ... }
Categories: Testing & QA

New security protections to reduce risk from unverified apps

Google Code Blog - Wed, 07/19/2017 - 20:45
Originally posted by Naveen Agarwal, Identity team and Wesley Chun (@wescpy), Developer Advocate, G Suite on the G Suite Developers Blog

We're constantly working to secure our users and their data. Earlier this year, we detailed some of our latest anti-phishing tools and rolled-out developer-focused updates to our app publishing processes, risk assessment systems, and user-facing consent pages. Most recently, we introduced OAuth apps whitelisting in G Suite to enable admins to choose exactly which third-party apps can access user data.

Over the past few months, we've required that some new web applications go through a verification process prior to launch based upon a dynamic risk assessment.

Today, we're expanding upon that foundation, and introducing additional protections: bolder warnings to inform users about newly created web apps and Apps Scripts that are pending verification. Additionally, the changes we're making will improve the developer experience. In the coming months, we will begin expanding the verification process and the new warnings to existing apps as well.

Protecting against unverified apps

Beginning today, we're rolling out an "unverified app" screen for newly created web applications and Apps Scripts that require verification. This new screen replaces the "error" page that developers and users of unverified web apps receive today.

The "unverified app" screen precedes the permissions consent screen for the app and lets potential users know that the app has yet to be verified. This will help reduce the risk of user data being phished by bad actors.

The "unverified app" consent flow

This new notice will also help developers test their apps more easily. Since users can choose to acknowledge the 'unverified app' alert, developers can now test their applications without having to go through the OAuth client verification process first (see our earlier post for details).

Developers can follow the steps laid out in this help center article to begin the verification process to remove the interstitial and prepare your app for launch.

Extending security protections to Google Apps Script

We're also extending these same protections to Apps Script. Beginning this week, new Apps Scripts requesting OAuth access to data from consumers or from users in other domains may also see the "unverified app" screen. For more information about how these changes affect Apps Script developers and users, see the verification documentation page.

Apps Script is proactively protecting users from abusive apps in other ways as well. Users will see new cautionary language reminding them to "consider whether you trust" an application before granting OAuth access, as well as a banner identifying web pages and forms created by other users.

Updated Apps Script pre-OAuth alert with cautionary language. Apps Script user-generated content banner.

Extending protections to existing apps

In the coming months, we will continue to enhance user protections by extending the verification process beyond newly created apps, to existing apps as well. As a part of this expansion, developers of some current apps may be required to go through the verification flow.

To help ensure a smooth transition, we recommend developers verify that their contact information is up-to-date. In the Google Cloud Console, developers should ensure that the appropriate and monitored accounts are granted either the project owner or billing account admin IAM role. For help with granting IAM roles, see this help center article.

In the API manager, developers should ensure that their OAuth consent screen configuration is accurate and up-to-date. For help with configuring the consent screen, see this help center article.

We're committed to fostering a healthy ecosystem for both users and developers. These new notices will inform users automatically if they may be at risk, enabling them to make informed decisions to keep their information safe, and will make it easier to test and develop apps for developers.

Categories: Programming

Iteration Planning Meeting: Simple Checklist

Plan or CRASH!

At some point planning for planning needs to give way to planning.  Planning identifies a goal and helps to envision the steps needed to attain that goal. In Agile, the planning event also sends a message about the amount of work a team anticipates delivering in an iteration. While every team faces variations based on context and the work that is in front of them, a basic planning process is encapsulated in the following simple checklist.

Planning Preamble:  

As the team gets settled and before they leap into breaking work down into smaller stories, activities, and tasks, the facilitator (e.g., Scrum Master in Scrum) should remind the team of the basics of planning, the ground rules, and the expected outcome. The basics include:

___  How long the planning session will last (everything is time-boxed)

___  The time frame the team is planning for (e.g., a day, two weeks, or several iterations).

___  Everybody writes and everybody needs to participate.  

Not using a scribe tends to keep people involved. Also having everyone involved in the recording will help ensure that the records are not the sole responsibility of a single person which is a potential future constraint.

___ Decide on the planning order.

Planning order is often addressed in grooming, by organizational and/or team culture, or up front as part of the chartering. Review  the order and the rationale for it.  The three most prevalent ordering strategies are:
1. Hardest Part First
2. Highest Value First
3. Dependency Driven (those that can’t be avoided)

___ Definition of Done

Everyone in the planning session should agree on what the base components of done mean and whether anything out of the ordinary needs to be kept in mind during planning.  Remind everyone that done is not the same as acceptance criteria.

Business Context:

Providing business context is similar to reviewing the basics.  Business context serves many purposes, including motivation and answering the question of “why?” for the team.  Business context will help the team to make decisions more effectively and efficiently.  Business context items include:

___ Iteration Theme.  

The theme defines the overall goal.

___ Business Constraints

Are there business constraints that should be accounted for during planning?  For example, sometimes functionality is needed on a specific date due to legal mandates or other events.  One client had to plan to deliver a prototype for an industry show.

___ Important Discussion From Grooming

Grooming, like any business conversation, is a data gathering event.  Always share relevant information with the whole team.  

Planning:  

Planning is a process of decomposing work into smaller pieces.  Teams begin with groomed user stories and then break them into tasks.  Breaking the work down allows team members to gain a better understanding of what needs to be done and how much work can be accomplished during the iteration.  Breaking work down also allows the work to be spread across the team based on capabilities, and allows the team to swarm to the work when there are issues.  Considerations and guidelines include:

___ Take the first cut by breaking stories into big tasks before getting into the details.

This step helps teams not to get frozen into planning paralysis and into specifying the solution early in the process.

___ Target a consistent level of granularity for tasks.  

I recommend all tasks be slightly less than a day’s effort which makes it easy to show progress or to identify roadblocks quickly.

___ Plan out loud.

Planning out loud helps to keep everyone involved, away from silo thinking, and makes sure planning stays reasonable.

___ Plan slightly over capacity.

The team should plan a bit more than their productivity or velocity indicates.  The over plan represents a stretch goal and allows the team to  be positioned to continue moving forward if progress is faster than anticipated.

___ Remember to set aside capacity for maintenance and support (if needed)

Wrap Up:

Running out of time is not a great way to end a planning session.  Two normal steps help to provide the team with planning closure:

___ Review and commit to the plan.

The team should be able to agree that they will accomplish the plan based on the planning exercise.  I suggest testing commitment as planning progresses. However, if the team cannot commit to the plan, the issue will need to be identified and another planning session convened to address the problem. Note: if the facilitator is testing commitment during the session, this should be a rare event.  When a team will not commit, I generally find they have been forced to over-commit by an external party.

___ Do a retrospective

As a team, identify one thing you can do better and commit to making the fix!

The simple planning session checklist is useful to get a team ready to plan, keep the team on track, and improve the process. I have used the checklist as a training tool and as a tool to help guide new facilitators.  

Next: dealing with common planning variations.    


Categories: Process Management

Welcome New Host Kishore Bhatia

We’re pleased to welcome Kishore Bhatia to SE Radio. Kishore is a developer at heart and currently works on solving enterprise business problems at scale using blockchains. He leads the engineering team at BlockApps with new product development, infrastructure, platform engineering, and operations. With 16+ years in software development (C/C++ on UNIX, Java/Web, distributed systems and DevOps […]
Categories: Programming

Shut the HAL Up

Android Developers Blog - Tue, 07/18/2017 - 18:00
Posted by Jeff Vander Stoep, Senior Software Engineer, Android Security

Updates are essential for security, but they can be difficult and expensive for device manufacturers. Project Treble is making updates easier by separating the underlying vendor implementation from the core Android framework. This modularization allows platform and vendor-provided components to be updated independently of each other. While easier and faster updates are awesome, Treble's increased modularity is also designed to improve security.

Isolating HALs
A Hardware Abstraction Layer (HAL) provides an interface between device-agnostic code and device-specific hardware implementations. HALs are commonly packaged as shared libraries loaded directly into the process that requires hardware interaction. Security boundaries are enforced at the process level. Therefore, loading the HAL into a process means that the HAL is running in the same security context as the process it's loaded into.

The traditional method of running HALs in-process means that the process needs all the permissions required by each in-process HAL, including direct access to kernel drivers. Likewise, all HALs in a process have access to the same set of permissions as the rest of the process, including permissions required by other in-process HALs. This results in over-privileged processes and HALs that have access to permissions and hardware that they shouldn't.

Figure 1. Traditional method of multiple HALs in one process.

Moving HALs into their own processes better adheres to the principle of least privilege. This provides two distinct advantages:

  1. Each HAL runs in its own sandbox and is permitted access to only the hardware driver it controls and the permissions granted to the process are limited to the permissions required to do its job.
  2. Similarly, the process loses access to hardware drivers and other permissions and capabilities needed by the HALs.
Figure 2. Each HAL runs in its own process.

Moving HALs into their own processes is great for security, but it comes at the cost of increased IPC overhead between the client process and the HAL. Improvements to the binder driver made IPC between HALs and clients practical. Introducing scatter-gather into binder improves the performance of each transaction by removing the need for the serialization/deserialization steps and reducing the number of copy operations performed on data from three down to one. Android O also introduces binder domains to provide separate communication streams for vendor and platform components. Apps and the Android frameworks continue to use /dev/binder, but vendor-provided components now use /dev/vndbinder. Communication between the platform and vendor components must use /dev/hwbinder. Other means of IPC between platform and vendor are disallowed.

Case study: System Server

Many of the services offered to apps by the core Android OS are provided by the system server. As Android has grown, so has system server's responsibilities and permissions, making it an attractive target for an attacker. As part of project Treble, approximately 20 HALs were moved out of system server, including the HALs for sensors, GPS, fingerprint, Wi-Fi, and more. Previously, a compromise in any of those HALs would gain privileged system permissions, but in Android O, permissions are restricted to the subset needed by the specific HAL.

Case study: media frameworks

Efforts to harden the media stack in Android Nougat continued in Android O. In Nougat, mediaserver was split into multiple components to better adhere to the principle of least privilege, with audio hardware access restricted to audioserver, camera hardware access restricted to cameraserver, and so on. In Android O, most direct hardware access has been entirely removed from the media frameworks. For example HALs for audio, camera, and DRM have been moved out of audioserver, cameraserver, and drmserver respectively.

Reducing and isolating the attack surface of the kernel
The Linux kernel is the primary enforcer of the security model on Android. Attempts to escape sandboxing mechanisms often involve attacking the kernel. An analysis of kernel vulnerabilities on Android showed that they overwhelmingly occurred in and were reached through hardware drivers.

De-privileging system server and the media frameworks is important because they interact directly with installed apps. Removing direct access to hardware drivers makes bugs difficult to reach and adds another layer of defense to Android's security model.

Categories: Programming

SE-Radio Episode 297: Kieren James-Lubin on Blockchain

Kishore Bhatia talks with Kieren James-Lubin about Blockchains. Topics include Blockchains, Cryptocurrency, Bitcoin, Distributed Ledger, Decentralized Apps, Ethereum, Smart Contract development with Solidity, ICOs and Tokens. Related Links: IEEE search for blockchains | Blockchain TED Talk | Ethereum | Solidity | Smart Contracts | Truffle for testing Smart Contracts | Guest Twitter: https://twitter.com/kjameslubin | Guest Email: kieren@blockapps.net
Categories: Programming

Why the Whole Team Should Participate When Estimating

Mike Cohn's Blog - Tue, 07/18/2017 - 17:00

A well-established best practice is that those who will do the work, should estimate the work, rather than having an entirely separate group estimate the work.

But when an agile team estimates product backlog items, the team doesn’t yet know who will work on each item. Teams will usually make that determination either during iteration (sprint) planning or in a more real-time manner in daily standups.

This means the whole team should take part in estimating every product backlog item. But how can someone with a skill not needed to deliver a product backlog item contribute to estimating it?

Before I can answer that, I need to briefly describe Planning Poker, which is the most common approach for estimating product backlog items. If you are already familiar with Planning Poker, you can skip the next section.

Planning Poker

Planning Poker is a consensus-based, collaborative estimating approach. It starts when a product owner or key stakeholder reads to the team an item to be estimated. Team members are then encouraged to ask questions and discuss the item so they understand the work being estimated.

Each team member is holding a set of poker-style playing cards on which are written the valid estimates to be used by the team. Any values are possible, but it is generally advisable to avoid being too precise. For example, estimating one item as 99 and another as 100 seems extremely difficult as a 1% increase in effort seems impossible to distinguish. Commonly used values are 1, 2, 3, 5, 8, 13, 20, 40, and 100 (a modified Fibonacci sequence) and 1, 2, 4, 8, 16, and 32 (a simple doubling of each prior value).
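To see why the coarse card values help, here is a small Java sketch (my illustration, not part of Planning Poker itself or of this post) that snaps a raw relative size to the nearest card in the modified Fibonacci deck; the raw sizes are invented. Spurious precision such as 99 versus 100 disappears because both snap to the same card.

import java.util.List;

// Illustration of the Planning Poker scale: raw relative sizes get snapped to the
// nearest card, so spurious precision disappears by construction.
public class PlanningPokerScale {

    private static final List<Integer> CARDS = List.of(1, 2, 3, 5, 8, 13, 20, 40, 100);

    static int nearestCard(double rawRelativeSize) {
        int best = CARDS.get(0);
        for (int card : CARDS) {
            if (Math.abs(card - rawRelativeSize) < Math.abs(best - rawRelativeSize)) {
                best = card;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Invented raw sizes relative to a known one-point story.
        double[] rawSizes = {1.4, 6.0, 11.0, 27.0, 95.0};
        for (double raw : rawSizes) {
            System.out.printf("raw %.1f -> card %d%n", raw, nearestCard(raw));
        }
    }
}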

Once the team members are satisfied they understand the item to be estimated, each estimator selects a card reflecting their estimate. All of the estimators then reveal their cards at the same time. If all the cards show the same value, that becomes the team’s estimate of the work involved. If not, the estimators discuss their estimates with an emphasis on hearing from those with the highest and lowest values.

If you aren’t familiar with estimating product backlog items this way, you may want to read more about Planning Poker before continuing.

How Can Someone Participate Without the Needed Skills

Equipped with a common understanding of Planning Poker, let’s see how a team member can contribute to estimating work that they cannot possibly be involved in. As an example, consider a database engineer who is being asked to estimate a product backlog item that will include front-end JavaScript and some backend Ruby on Rails coding, and will then need to be tested.

How can this database engineer contribute to estimating this product backlog item?

There are three reasons why it’s possible--and desirable.

1. Planning Poker Isn’t Voting

When playing Planning Poker, participants are not voting on their preferred estimate. The team will not settle on the estimate that gets the most votes. Instead, each estimator is given the credibility they deserve. If one programmer wrote the original code that needs to be modified and happened to be in that same code a couple of days ago, the team should give more credence to that programmer’s estimate than to the estimate of a programmer who has never been in this part of the system.

This means that each team member can estimate, but that the team will weigh more heavily the opinions of those more closely aligned with the work.

2. Estimates Are Relative and That’s Easier

In Planning Poker, the estimates created should be relative rather than absolute estimates. That is, a team will say things like, “This item will take twice as long as the other item, but we can’t estimate the actual number of hours for either item.”

For example, this blog post contains one illustration. I provided my artist with a short description of what I had in mind for an image and he created the illustration. Most of my blog posts have one title illustration. Even though I have no artistic skill, I could estimate the work to create those illustrations as about equal each week.

Sure, some illustrations are more involved, and others can reuse a few elements from a past illustration. But most are close enough that I could estimate them as taking the same effort.

But some blog posts have two images. Even though I have no design skills at all, I’m willing to say that creating two images will take about twice as long as creating one image.

So a tester is not being asked to estimate how many hours it will take a programmer to code something. Instead the tester is estimating coding that thing relative to other things.

That can still be hard but relative estimates are easier than absolute estimates. And remember that because of point one above, the person whose skills may not be needed on the story will not be given as much credibility as someone whose skills will be used.

3. Everyone Contributes, Even If They Don’t Estimate

I want everyone on the team to participate in an estimating meeting. But that does not mean everyone estimates every item.

Despite relative estimating being easier than absolute estimating, there will still be times when someone will not be able to estimate a particular product backlog item. This might be because the person’s skills aren’t needed on that item. But that person may still be able to contribute to the discussion.

Sometimes the person whose skills are not needed on a given product backlog item will be the most astute in asking questions about the item, uncovering overlooked assumptions, or in seeing work that others on the team have missed.

For example, a database developer whose skills are not needed to deliver a product backlog item may be the one who remembers that:

The team promised to clean up that code the next time they were in it

There is an impact on reports that no one has considered

That when the team did a similar story a year ago it took much longer than anyone anticipated

and so on. The database developer may know these things or ask these questions even if unable to personally estimate their impact on the work.

A Few Examples

To provide a few examples, let’s return to the role of a database engineer in estimating a product backlog item that has no database work. Here are some examples of things that team member might say that would add value to estimating that product backlog item:

  • “I’m holding up this high estimate because this sounded like a lot of effort to code. It sounded like about twice as much as this other backlog item.”

    In this case, the database engineer is making a relative assessment of effort. This will presumably be based on things said by coders. In some cases, the database engineer may be wrong in that assessment. But that doesn’t mean the person’s opinion is always without merit. The database engineer’s opinion should be given the merit it deserves (which may be great or little).

  • “Are you sure it’s that much work? I thought two sprints ago, you programmers were going to refactor that part of the system. If that happened, isn’t this easier now?”

    Here the database engineer is bringing up information that others may not have recalled or considered. It may or may not be of value. But sometimes it will be.

  • “Are you sure it’s not more work than that? Have you considered the need to do this and that?”

    In this case, the database engineer is pointing out work that the others may have overlooked. If that work is significant, it should be reflected in the estimate.
When Everyone Participates, It Increases Buy In

There’s one final reason why I suggest the whole team participate when estimating product backlog items, especially with a technique such as Planning Poker: Doing so increases the buy in felt by all team members to the estimates.

When someone else estimates something for you or me, we don’t feel invested in that estimate. It may or may not be a good estimate. But if it’s not, that’s not our fault. We will do much more to meet an estimate we gave than one handed to us.

We want everyone on a team to participate in estimating that team’s work. You never know in advance who will ask the insightful questions about a product backlog item. Sometimes it’s one of the team members who will work on that item. But other times those questions come from someone whose skills are not needed on that item.

So while not every team member needs to provide an estimate for each item, every team member does need to participate in the discussion surrounding every estimate. Teams do best when the whole team works together for the good of the product, from estimation through to delivery.
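
As a rough illustration of the Planning Poker mechanics mentioned above, here is a minimal sketch (in Python, with hypothetical team members and votes). It shows only the aggregation step: those who can estimate play a card, abstainers still join the discussion, and the high and low estimators explain their reasoning before the team votes again.

```python
# A minimal sketch of one Planning Poker round (hypothetical names and votes).
# A team member may abstain (None) when their skills aren't needed on the item,
# but they still take part in the discussion that follows.
from statistics import median

votes = {
    "Ana (coder)": 5,
    "Raj (coder)": 8,
    "Mei (tester)": 5,
    "Sam (DBA)": None,   # no database work on this item, so Sam abstains
}

played = {name: points for name, points in votes.items() if points is not None}

low = min(played, key=played.get)
high = max(played, key=played.get)

if played[low] != played[high]:
    # The outliers explain their reasoning, then the whole team re-votes.
    print(f"{low} and {high} explain their estimates before the next round.")
else:
    print(f"Consensus: {median(played.values())} points.")
```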

What Has Your Experience Been?

What has your experience been with involving the whole team in estimating product backlog items? Have you found it beneficial to have everyone participate even though not everyone has skills needed on each item?

Quote of the Day

Herding Cats - Glen Alleman - Tue, 07/18/2017 - 15:01

You cannot know the value unless you know the cost to acquire that value

Focusing on value alone and ignoring the cost to acquire that value leads to disappointment when you discover you paid too much, for too little, too late

Categories: Project Management

What is an Estimate? What is Estimating?

Herding Cats - Glen Alleman - Tue, 07/18/2017 - 03:28

I work in a domain where estimates are made every single week. Estimate to Complete (ETC), Estimate at Completion (EAC), and Estimated Completion Date (ECD) are the lifeblood of our software intensive system of systems (SISoS) programs. To the left is a typical SISoS we work on.

Embedded systems, data processing, image processing, web interfaces, backend databases, networking of collections of devices on the ground and in the air, training systems, logistics systems, maintenance and testing systems.

The management of SISoS is really no different than the management of any other enterprise class software system. People, Processes, and Tools all interacting in complex and complicated ways. 

This is a normal environment for the mission-critical, must-work types of programs I'm on as the Program Architect.

There are several partitions of this information that are common in building the Performance Measurement Baseline (PMB). The PMB is a time phased, budgeted description of the project. In traditional programs, this is an Integrated Master Plan and Integrated Master Schedule, with budgets laid into the Work Packages. In Agile the Product Roadmap and Release Plan are the basis of the PMB.


Since all projects operate in the presence of uncertainty, with the resulting risk - estimates are needed to make decisions that impact the future.

In the #NoEstimates paradigm, the term estimate is redefined to be Forecast and relabeled as NOT Estimating. This, of course, is nonsense, since estimates are about the past, present, and future. When past data is used, empirical estimating is the result. Estimates can be built with this empirical data, but models - parametric, Monte Carlo, Method of Moments - can also be the basis for estimating.
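
As one rough illustration of estimating from empirical data, here is a minimal Monte Carlo sketch (in Python, with made-up throughput numbers, not from any real program). It resamples past weekly throughput to produce a distribution of weeks-to-complete, which is one simple way figures like ETC and ECD can be grounded in data rather than guessed.

```python
# A minimal Monte Carlo completion forecast (hypothetical numbers).
# Resample historical weekly throughput to estimate how many weeks
# of work remain for a fixed backlog of remaining items.
import random

history = [7, 9, 4, 8, 6, 10, 5]   # items completed in each past week (made up)
remaining_items = 60
trials = 10_000

weeks_needed = []
for _ in range(trials):
    done, weeks = 0, 0
    while done < remaining_items:
        done += random.choice(history)   # sample a plausible week from the past
        weeks += 1
    weeks_needed.append(weeks)

weeks_needed.sort()
p50 = weeks_needed[int(0.50 * trials)]
p85 = weeks_needed[int(0.85 * trials)]
print(f"50% confidence: {p50} weeks; 85% confidence: {p85} weeks")
```

The particular model matters less than the point: parametric models, Method of Moments, or a reference class of past programs would serve the same role of quantifying the uncertainty behind the estimate.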

But Estimating is NOT Guessing. To guess is to suppose (something) without sufficient information to be sure of being correct.

So now to the point - What is an Estimate?

I belong to several professional cost estimating organizations that provide guidance and standards.

The generic definition of an estimate is straightforward, simple-minded, and correct:

A value that is close enough to the right answer, developed with some thought or calculation.

This allows that value to be called whatever you want if you really need to redefine an estimate. But estimates are about the past, present, and future in the presence of uncertainty.

A better definition is:

The process of predicting the most realistic cost, effort, or technical outcome required to complete a software project.

But no matter what you decide to call what you do to avoid calling it estimating, there is simply no way to make a decision in the presence of uncertainty without estimating the impact of that decision on your project. You can certainly make a decision without estimates, but you'll have no idea what will happen until it's happened. And since uncertainty is the creator of risk, you'll not have complied with Tim Lister's direction:

Risk Management is How Adults Manage Projects

So, like the phrase we use many times:

What's the difference between our program and the Boy Scouts? The Boy Scouts have adult supervision

Anyone telling you that you can decide without estimating is either working on de minimis projects (where no one cares) or selling you a hoax.

Categories: Project Management

A Notion Misused, But Likely Never Read

Herding Cats - Glen Alleman - Mon, 07/17/2017 - 17:10

There are a few topics in the agile world that are stalking horses for the agile advocates. One is The Principles of Scientific Management by Frederick Winslow Taylor. Taylorism is tossed around, like Waterfall, as the source of all evil in agile development.

It makes you wonder whether those railing against Taylorism have actually read the book. The book (a paper, actually) is 76 pages and describes Taylor's principles of scientific management.

Let's start with Taylor.  The Introduction says it all...

In the past the man has been first; in the future, the system must be first. This in no sense, however, implies that great men are not needed. On the contrary, the first object of any good system must be that of developing first-class men; and under systematic management, the best man rises to the top more certainly and more rapidly than ever before.

There are three principles of Taylor's approach to business improvement - written in 1911 - that are applicable today, especially for agile software development (page iv):

1. There is a great loss through inefficiency in almost all our daily acts 
2. The remedy for this inefficiency lies in systematic management, rather than searching for some unusual or extraordinary man. 
3. The best management is a true science, resting upon clearly defined laws, rules, and principles, as a foundation. These fundamental principles are applicable to all kinds of human activities, from our simplest individual acts to the work of our great corporations, which call for the most elaborate cooperation.

Remember these are 1911 words. 

Let's look at Chapter 1.

The principal object of management should be to secure the maximum prosperity for the employer, coupled with the maximum prosperity for each employé.

The words "maximum prosperity" are used, in their broad sense, to mean not only large dividends for the company or owner, but the development of every branch of the business to its highest state of excellence, so that the prosperity may be permanent.

The role of business is to make money for investors. These can be employee investors, but at the end of the day, the business needs to make a profit (unless it's a non-profit and then it still needs enough funding to stay in business).

If we are writing software for money, then the goal of the business is to keep writing software for money. 

In the same way maximum prosperity for each employé means not only higher wages than are usually received by men of his class, but, of more importance still, it also means the development of each man to his state of maximum efficiency, so that he may be able to do, generally speaking, the highest grade of work for which his natural abilities fit him, and it further means giving him, when possible, this class of work to do.

Remember this is 1911 culture and 1911 language.

It would seem to be self-evident that maximum prosperity for the employer, coupled with maximum prosperity for the employé, ought to be the two leading objects of management, that even to state this fact should be unnecessary. And yet there is no question that, throughout the industrial world, a large part of the organization of employers, as well as employés, is for war rather than for peace, and that perhaps the majority on either side do not believe that it is possible so to arrange their mutual relations that their interests become identical.

This sounds familiar in 2017. But it does not excuse the habit of claiming that Taylorism is somehow evil.

Categories: Project Management

You'll Never Believe the Big Hairy Audacious Startup John Jacob Astor Created in 1808


Think your startup has a Big Hairy Audacious Goal? Along with President Thomas Jefferson, John Jacob Astor conceived (in 1808) and implemented (in 1810) a plan to funnel the entire tradable wealth of the westernmost sector of the North American continent north of Mexico through his own hands. Early accounts described it as “the largest commercial enterprise the world has ever known.”

Think your startup raised a lot of money? Astor put up $400,000 ($7,614,486 in today's dollars) of his own money, with more committed after the first prototype succeeded.

Think competition is new? John Jacob Astor dealt with rivals in one of three ways: he tried to buy them out; if that didn’t work, he tried to partner with them; if he failed to join them, he tried to crush them.

Think your startup requires commitment? Joining Astor required pledging five years of one’s life to a start-up venture bound for the unknown.

Think your startup works hard? Voyageurs paddled twelve to fifteen hours per day, with short breaks while afloat for a pipe of tobacco. During that single day each voyageur would make more than thirty thousand paddle strokes. On the upper Great Lakes, the canoes traversed hundreds of miles of empty, forested shorelines and vast stretches of clear water without ports or settlements or sails, except for the scattered Indian encampment.

Think your product is complex? Astor planned, manned and outfitted one overseas and two overland expeditions to build the equivalent of a Jamestown settlement on the Pacific Coast.

Think your startup parties hard? Every nook and corner in the whole island swarmed, at all hours of the day and night, with motley groups of uproarious tipplers and whisky-hunters. It resembled a great bedlam, the frantic inmates running to and fro in wild forgetfulness. Many were eager for company and with a yen to cut loose—drinking, dancing, singing, whoring, fighting, buying knickknacks and finery from the beach’s shacks and stalls. 

Think your startup was an adventure you can never forget? I have been twenty-four years a canoe man, and forty-one years in service; no portage was ever too long for me. Fifty songs could I sing. I have saved the lives of ten voyageurs. Have had twelve wives and six running dogs. I spent all my money in pleasure. Were I young again, I should spend my life the same way over. There is no life so happy as a voyageur’s life!

Think people at your startup dress weird? Above the waist, the voyageurs wore a loose-fitting and colorful plaid shirt, perhaps a blue or red, and over it, depending on the weather, a long, hooded, capelike coat called a capote. In cold winds they cinched this closed with a waist sash—the gaudier the better, often red. From the striking sash dangled a beaded pouch that contained their fire-making materials and tobacco for their “inevitable pipe.”...The true “Man of the North” wore a brightly colored feather in his cap to distinguish himself from the rabble.

Think your startup takes risks? Half of them died.

And like most startups, they accomplished a lot, but ultimately failed to earn a payout.

Thomas Jefferson said to John Jacob Astor: Your name will be handed down with that of Columbus & Raleigh, as the father of the establishment and the founder of such an empire. Unfortunately, not so much, Tom. How many have heard of Astor today? Not many, unless you've traveled to Astoria, Oregon. Astoria in the right weather is a gorgeous place with a hot beer scene.

It's trite to say the reward is in the journey, but in this case the saying is true: the journey was larger than digital life.

For the complete story read: Astoria: John Jacob Astor and Thomas Jefferson's Lost Pacific Empire: A Story of Wealth, Ambition, and Survival.

Categories: Architecture