Software Development Blogs: Programming, Software Testing, Agile, Project Management

Methods & Tools

Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

Meet the 20 finalists of the Google Play Indie Games Contest

Android Developers Blog - 7 hours 53 min ago
Posted by Matteo Vallone, Google Play Games Business Development

Back in November, we launched the Google Play Indie Games Contest for developers from 15 European countries, to celebrate the passion and innovation of the indie community in the region. The contest will reward the winners with exposure to industry experts and players worldwide, as well as other prizes that will showcase their art and help them grow their business on Android and Google Play.

Thank you to the nearly 1000 of you who submitted high quality games in all types of genres! Your creativity, enthusiasm and dedication have once again impressed us and inspired us. We had a very fun time testing and judging the games based on fun, innovation, design excellence and technical and production quality, and it was challenging to select only 20 finalists:

Meet the 20 finalists
(In alphabetical order)

  • Blind Drive (coming soon) - Lo-Fi People, Israel
  • Causality (coming soon) - Loju, United Kingdom
  • Crap! I'm Broke: Out of Pocket - Arcane Circus, Netherlands
  • Egz - Lonely Woof, France
  • Ellipsis - Salmi GmbH, Germany
  • Gladiabots - GFX47, France
  • Happy Hop: Kawaii Jump - Platonic Games, Spain
  • Hidden Folks (coming soon) - Adriaan de Jongh, Netherlands
  • Lichtspeer (coming soon) - Lichthund, Poland
  • Lost in Harmony - Digixart Entertainment, France
  • Mr Future Ninja (coming soon) - Huijaus Studios, Finland
  • Paper Wings - Fil Games, Turkey
  • PinOut - Mediocre, Sweden
  • Power Hover - Oddrok, Finland
  • Reigns - Nerial, United Kingdom
  • Rusty Lake: Roots - Rusty Lake, Netherlands
  • Samorost 3 - Amanita Design, Czech Republic
  • The Battle of Polytopia - Midjiwan AB, Sweden
  • twofold inc. - Grapefrukt games, Sweden
  • Unworded (coming soon) - Bento Studio, France
Check out the prizes

All the 20 finalists are getting:
  • The opportunity to exhibit and showcase their game at the final event held at the Saatchi Gallery in London, on 16th February 2017.
  • Promotion of their game on a London billboard for one month.
  • Two tickets to attend a 2017 Playtime event. This is an invitation-only event for top apps and games developers on Google Play.
  • One Pixel XL smartphone.
At the event at Saatchi, the finalists will also have a chance to make it to the next rounds and win additional prizes, including:
  • YouTube influencer campaigns worth up to 100,000 EUR.
  • Premium placements on Google Play.
  • Tickets to Google I/O 2017 and other top industry events.
  • Promotions on our channels.
  • Special prizes for the best Unity game.
  • And more!

Come support them at the final event

At the final event, attendees will have a say in which 10 of these finalists get to pitch their games to the jury, which will decide the final contest winners receiving the top prizes.

Register now to join us in London, meet the developers, check out their great games, vote for your favourites, and have fun with various industry experts and indie developers.



A big thank you again to everyone who entered and congratulations to the finalists. We look forward to seeing you at the Saatchi Gallery in London on 16th February.
Categories: Programming

Cone of Uncertainty - Part Trois

Herding Cats - Glen Alleman - Wed, 01/18/2017 - 06:12

The notion of the Cone of Uncertainty has been around for a while, going back at least to Barry Boehm's work in Software Engineering Economics (Prentice-Hall, 1981). The poster below is from Steve McConnell's site and makes several things clear.

  • The Cone is a project management framework describing the uncertainty in estimates of cost and schedule and of other project attributes (technical performance parameters, for example). Estimates of cost, schedule, and technical performance on the left side of the cone have a lower probability of being precise and accurate than estimates on the right side of the cone. One reason is the higher level of uncertainty early in the project - the aleatory and epistemic uncertainties that create risk to the success of the project. Other uncertainties that create risk include:
    • Unrealistic performance expectations with missing Measures of Effectiveness and Measures of Performance
    • Inadequate assessment of risks and unmitigated exposure to those risks without proper handling plans
    • Unanticipated technical issues without alternative plans and solutions to maintain effectiveness
  • Since all project work contains uncertainty, reducing this uncertainty - which reduces risk - is the role of the project team and their management: the team itself, the Project or Program Manager, or on larger programs the Risk Management owner.

Here's a simple definition of the Cone of Uncertainty: 

The Cone of Uncertainty describes the evolution of the amount of uncertainty during a project. Uncertainty not only decreases as time passes, its impact is also diminished by risk management, specifically by decision making. At the beginning of a project, comparatively little is known about the product or work results, so estimates are subject to large uncertainty. As more research and development is done, more is learned about the project, and the uncertainty tends to decrease, reaching 0% when all residual risk has been terminated or transferred. This usually happens by the end of the project.

So the question is: how much variance reduction needs to take place, in any and all of the project attributes (risk, effectiveness, performance, cost, schedule - shown below), and at what points in time, to increase the probability of project success?

This is the paradigm of the Cone of Uncertainty: it's a planned development compliance engineering tool, not an after-the-fact data collection tool.

The Cone is NOT the result of the project's past performance. The Cone IS the planned reduction of uncertainty as the project proceeds. When actual measures of cost, schedule, and technical performance are outside the planned cone of uncertainty, corrective actions must be taken to move those uncertainties back inside the cone if the project is going to meet its cost, schedule, and technical performance goals.

The measure modeled in the Cone is the quantitative basis of a control process that establishes the goal for the performance measure. Capturing the actual performance, comparing it to the planned performance, and checking compliance with the upper and lower control limits provides the guidance for making adjustments to keep that variable's performance within acceptable limits.

The Benefits of the Use of the Cone of Uncertainty 

The planned value, the upper and lower control limits, and the measures of actual values form a Closed Loop Control System - a measurement-based feedback process to improve the effectiveness and efficiency of the project management processes by [1]

  • Analyzing trends that help focus on problem areas at the earliest point in time - when the variable under control starts misbehaving, intervention can be taken. There is no need to wait until the end to find out you're not going to make it.
  • Providing early insight into error-prone products that can then be corrected earlier, and thereby at lower cost - when the trends are headed toward the UCL or LCL, intervention can take place.
  • Avoiding or minimizing cost overruns and schedule slips by detecting them early enough in the project to implement corrective actions - by observing trends toward breaches of the UCL and LCL.
  • Performing better technical planning, and making adjustments to resources based on discrepancies between planned and actual progress.

 

[Figure: The Cone of Uncertainty poster from Steve McConnell's site]

A critical success factor for all project work is Risk Management. And risk management includes managing all kinds of risks: risks from all sources of uncertainty, including technical, cost, schedule, and management risk. Each of these uncertainties and the risks they produce can take on a range of values described by probability and statistical distribution functions. Knowing what ranges are possible and what ranges are acceptable is a critical project success factor.

We need to know the Upper Control Limits (UCL) and Lower Control Limits (LCL) of the ranges of all the variables that will impact the success of our project. With this paradigm we have logically connected project management processes with control system processes: when variances created by uncertainty go outside the UCL and LCL, corrective action is needed. Here's a work in progress paper, "Is there an underlying Theory of Project Management," that addresses some of the issues with control of project activities.

Here are some examples of planned variances and the management of actual variances to make sure the project stays on plan.

A product weight as a function of the program's increasing maturity: in this case, the projected base weight is planned and the planned weights of each of the major subsystems are laid out as a function of time. Tolerance bands for the projected base weight provide management with actionable information about the progression of the program. If the vehicle gets overweight, money and time are needed to correct the undesirable variance. This is a closed loop control system for managing the program with a Technical Performance Measure (TPM). There can be cost and schedule performance measures as well.

[Figure: Planned product weight with tolerance bands as a function of program maturity (a Technical Performance Measure)]

Below is another example of a Weight reduction attribute that has error bands. In this example (an actual vehicle like the example above) the weight must be reduced as the program proceeds left to right. We have a target weight at Test Readiness Review of 23KG. A 25KG vehicle was sold in the proposal, and we need a target weight that has a safety margin, so 23KG is our target.

As the program proceeds, there are UCL and LCL bands that follow the planned weight. The orange dots are the actual weights from a variety of sources: a design model (3D Catia CAD system), a detailed design model, a bench scale model that can be measured, a non-flying prototype, and then the 1st Flight Article. As the program progresses, each of the weight measurements, from these models through to the final article, is compared to the planned weight. We need to keep these values inside the error bands of the NEEDED weight reduction if we are to stay on plan.
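
As a concrete illustration of this kind of closed loop check, here is a minimal sketch (not from the original post) that compares actual weight measurements against a planned burndown with upper and lower tolerance bands and flags the points that need corrective action. The milestone names, planned weights, band widths, and measured values are assumptions for illustration only; only the 23 kg target under the 25 kg sold weight comes from the example above.

```python
# Minimal sketch of a closed-loop Technical Performance Measure (TPM) check:
# compare actual weight measurements against a planned burndown with
# upper/lower tolerance bands. All numbers below are illustrative assumptions.

PLAN = [
    # (milestone, planned_weight_kg, tolerance_kg)
    ("Design Model",      27.0, 1.5),
    ("Detailed Design",   26.0, 1.2),
    ("Bench Scale Model", 25.0, 1.0),
    ("Prototype",         24.0, 0.8),
    ("Test Readiness",    23.0, 0.5),  # 23 kg target with margin under the 25 kg sold weight
]

ACTUALS = {  # hypothetical measured weights at each milestone
    "Design Model": 27.8,
    "Detailed Design": 26.5,
    "Bench Scale Model": 26.4,  # breaches the upper band -> corrective action needed
    "Prototype": 24.5,
}

def check_tpm(plan, actuals):
    """Return (milestone, actual, lcl, ucl, in_band) for each measured milestone."""
    report = []
    for milestone, planned, tol in plan:
        if milestone not in actuals:
            continue  # not yet measured
        actual = actuals[milestone]
        lcl, ucl = planned - tol, planned + tol
        report.append((milestone, actual, lcl, ucl, lcl <= actual <= ucl))
    return report

if __name__ == "__main__":
    for milestone, actual, lcl, ucl, ok in check_tpm(PLAN, ACTUALS):
        status = "on plan" if ok else "OUT OF BAND - find root cause, take corrective action"
        print(f"{milestone:18s} actual={actual:5.1f} kg  band=[{lcl:.1f}, {ucl:.1f}]  {status}")
```

Running this flags the bench scale measurement, which is the whole point of the closed loop: the breach is visible long before the flight article is weighed.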

This is the critical concept in successful project management

We must have a plan for the critical attributes - Mission Effectiveness, Technical Performance, Key Performance Parameters - of the deliverable items. If these are not compliant with the plan, that will be one of the Root Causes of a program performance shortfall. We must have a burndown or burnup plan for producing the end item deliverables that match those parameters over the course of the program. Of course, we have a wide range of possible outcomes for each item in the beginning, and as the program proceeds the measured variances on those items move toward compliance with the target number - in this case, weight.

[Figure: Weight reduction plan with error bands and actual weight measurements, from design models through the 1st Flight Article]

Here's another example of the Cone of Uncertainty; in this case, the uncertainty is the temperature of an oven being designed by an engineering team. The UCL and LCL are defined BEFORE the project starts. These are used to inform the designer of the progress of the project as it proceeds. Staying inside the control limits is the planned progress path to the final goal - in this case, temperature.

The Cone of Uncertainty defines the signaling boundaries of the Closed Loop Control system used to manage the project to success.

[Figure: Cone of Uncertainty for an oven temperature design parameter, with UCL and LCL defined before the project starts]

It turns out the cone can also be a flat range with Upper and Lower Control Limits on the variable that is being developed - a design-to variable - in this example a Measure of Performance. In this case, it is a Measure of Performance that needs to stay within the upper and lower limits as the project progresses through its gates. If this variable is out of bounds, the project will have to pay in some way to get it back to Green.

A Measure of Performance characterizes physical or functional attributes relating to the system operation, measured or estimated under specific conditions. Measures of Performance are (1) Attributes that assure the system has the capability and capacity to perform and (2) Assessment of the system to assure it meets design requirements to satisfy the Measures of Effectiveness.

[Figure: A Measure of Performance held within Upper and Lower Control Limits across project gates]

Another cone style is the cone of confidence in a delivery date; in this actual case, it's a launch date. As the program moves from left to right, we need to assure that the launch date moves from a low-confidence date to a date that has a chance of being correct. The BLUE bars are the probabilistic ranges of the current estimated date. As the program moves forward those ranges must be reduced if we're going to show up as needed. The planned date and a date with margin are the build-to dates. As the program moves forward, the confidence in the date must increase and move toward the need date.

  • The probabilistic completion times change as the program matures
  • The efforts that produce these improvements must be defined and managed
  • The error bands of the assessment points must include the risk mitigation activities as well
  • The planned activities show how the error band narrows over time
    • This is the basis of a risk tolerant plan
    • The probabilistic intervals become more reliable as the risk mitigation and the maturity assessment add confidence to the planned launch date

Just a reminder again - the Cone of Uncertainty is a DESIRED path, NOT the result of an unmanaged project outcome.

Risk Management, as shown below, is how Adults Manage Projects

[Figure: Risk management process]

Wrap Up On the Misunderstanding of the Purpose and Value of the Cone of Uncertainty

When you hear... 

I have data that shows that uncertainty (or any other needed attribute) doesn't reduce and therefore the COU is a FAKE ... OR ... I see data on my projects where the variance is getting worse as we move forward, instead of narrowing as the Planned COU tells us it should be to meet our goals ...

...then that project is out of control, starting with a missing steering target. That means it's under Open Loop Control and will be late, over budget, and likely not perform to the needed effectiveness and performance parameters. And when you see these out-of-control situations, go find the Root Cause and generate the Corrective Action.

This data is an observation of a project not being managed as Tim Lister suggests - Risk Management is How Adults Manage Projects. 

And if these observations are taking place without corrective actions on the Root Causes of the performance shortfall, the management is behaving badly. They're just observers of the train wreck that is going to happen real soon.

The Engineering Reason for the Cone of Uncertainty Model and the Value it Provides the Decision Makers

The Cone of Uncertainty is NOT an output from the project's behaviour; by then it's too late.
It's a Steering Target Input to the Management Framework for increasing the probability of the project's success.
This is the Programmatic Management of the project in support of the Technical Management of the project. This process is an engineering discipline; Systems Engineering, Risk Engineering, and Safety and Mission Assurance Engineering are typical roles where we work.
To suggest otherwise is to invert the paradigm and remove any value from the post-facto observations of the project's performance. At that point it's too late - the horse has left and there's no getting him back.
Defining the planned and needed variance levels at planned points in the project is the basis of the closed loop control system needed to increase the probability of success.
When variances outside the planned variance appear, the Root Cause of those variances must be found and corrective action taken.

Resources

[1] Systems Engineering Measurement Primer, INCOSE

[2] System Analysis, Design, and Development Concepts, Principles, and Practices, Charles Wasson, John Wiley & Sons

[3] SMC Systems Engineering Primer & Handbook: Concept, Processes, and Techniques, Space & Missile Systems Center, U.S. Air Force

[4] Defense Acquisition Guide, Chapter 4, Systems Engineering, 15 May 2013.

[5] Program Managers Tool Kit, 16th Edition, Defense Acquisition University.

[6] "Open Loop / Close Loop Project Controls"

[7] "Reducing Estimation Uncertainty with Continuous Assessment: Tracking the 'Cone of Uncertainty'," Pongtip Aroonvatanaporn, Chatchai Sinthop, Barry Boehm. ASE’10, September 20–24, 2010, Antwerp, Belgium. 

[8] Boehm, B. “Software Engineering Economics”. Prentice-Hall, 1981.

[9] Boehm, B., Abts, C., Brown, A. W., Chulani, S., Clark, B. K., Horowitz, E., Madachy, R., Reifer, D. J., and Steece, B. Software Cost Estimation with COCOMO II, Prentice-Hall, 2000.

[10] Boehm, B., Egyed, A., Port, D., Shah, A., Kwan, J., and Madachy, R. "Using the WinWin Spiral Model: A Case Study," IEEE Computer, Volume 31, Number 7, July 1998, pp. 33-44.

[11] Cohn, M. Agile Estimating and Planning, Prentice-Hall, 2005

[12] DeMarco, T. Controlling Software Projects: Management, Measurement, and Estimation, Yourdon Press, 1982.

[13] Fleming, Q. W. and Koppelman, J. M. Earned Value Project Management, 2nd edition, Project Management Institute, 2000

[14] Galorath, D. and Evans, M. Software Sizing, Estimation, and Risk Management, Auer-bach, 2006

[15] Jorgensen, M. and Boehm, B. "Software Development Effort Estimation: Formal Models or Expert Judgment?" IEEE Software, March-April 2009, pp. 14-19

[16] Jorgensen, M. and Shepperd, M. “A Systematic Review of Software Development Cost Estimation Studies,” IEEE Trans. Software Eng., vol. 33, no. 1, 2007, pp. 33-53

[17] Krebs, W., Kroll, P., and Richard, E. Un-assessments –reflections by the team, for the team. Agile 2008 Conference

[18] McConnell, S. Software Project Survival Guide, Microsoft Press, 1998

[19] Nguyen, V., Deeds-Rubin, S., Tan, T., and Boehm, B. "A SLOC Counting Standard," COCOMO II Forum 2007

[20] Putnam L. and Fitzsimmons, A. “Estimating Software Costs, Parts 1,2 and 3,” Datamation, September through December 1979

[21] Stutzke, R. D. Estimating Software-Intensive Systems, Pearson Education, Inc, 2005. 

Related articles

  • Complex, Complexity, Complicated
  • Economics of Software Development
  • Herding Cats: Economics of Software Development
  • Estimating Probabilistic Outcomes? Of Course We Can!
  • I Think You'll Find It's a Bit More Complicated Than That
  • Risk Management is How Adults Manage Projects

 

Categories: Project Management

Emotional Intelligence, Useful?

Lots of screens!

So many things to learn and so little time to do it!

I was asked why emotional intelligence was important and whether emotional intelligence can be learned.

With a little probing on the second part of the question, it was suggested that there is a school of thought that emotional intelligence is an inherent human attribute; you have it or you don't. The "either you are or aren't" argument is similar to the arguments those with a fixed mindset make about most capabilities. The concept of a fixed mindset comes from the book Mindset by Carol Dweck. In the book, Dweck argues that mindsets are not fixed, and in the essay A Few Steps To Improving Your Emotional Intelligence we identified several attributes that comprise emotional intelligence and that can be improved. I do not accept that emotional intelligence is a fixed human ability. Simply put, your emotional intelligence quotient is not fixed; it can be learned and improved.

The question of usefulness suggests that emotional intelligence, while interesting, is not useful in the day-to-day operation of an Agile team (or, by extension, an organization). Fortunately, you do not have to look far to find applications of emotional intelligence in many day-to-day scenarios, ranging from team meetings to sales. As a leader or coach, it is easy to identify scenarios where emotional intelligence is useful. Three examples are:

Defusing Problem Situations

Problems happen, and most involve people. The large problems, for example an irate client or someone acting counter to the team's needs, are often easily spotted once they have happened. However, they are often the result of an accumulation of little issues; minor abrasions can add up. Examples of minor issues might include an occasional bit of underperformance, a bad mood, someone failing to wish you happy birthday, or talking over you on occasion. The list can go on. Emotional intelligence helps not only to recognize the problem but, more importantly, when combined with listening to those within the boundary of the problem, it helps everyone to unburden. Understanding and listening are inputs into empathy, which is needed to come up with a fitting solution. Emotional intelligence is a tool to defuse problem situations.

Curiosity

Two of the five competencies of emotional intelligence are, first, awareness of emotions in yourself and others (including self-awareness) and, second, the ability to construct relationships. These two competencies are typically reflected as curiosity. In my re-read of Carol Dweck's Mindset, one of the attributes of the growth mindset is an insatiable need to learn and to experience challenges, a related form of curiosity. Emotional intelligence is linked to the growth mindset through curiosity (at the very least). Curiosity and the desire to learn are important capabilities in stable cross-functional teams. Agile teams with emotionally intelligent members will be able to stretch to meet their customers' needs because, leveraging their curiosity, they can identify which skills they need, then learn those skills and discover new solutions. Curiosity may have killed the cat, but the curiosity and learning fostered by emotional intelligence will make the Agile team.

Repeat Clients

I have many friends who are fellow consultants, both independent and part of larger organizations. All of them are intelligent, all of them have great pedigrees, and many of them are successful as independents. When consultants gather, the one conversation all consultants have is about getting and keeping clients. I have observed that the consultants who have repeating/recurring clients have significantly more emotional intelligence than those who are great at getting clients but less so at generating repeat business. Emotional intelligence is a tool to build meaningful relationships that make it easier to get repeat business. The essay Emotional Intelligence: A Few Basics referenced Daniel Kahneman's observation that people would rather do business with a person they like and trust than with someone they don't. Emotional intelligence makes it easier to build solid relationships that translate into repeat clients. Without emotional intelligence, it is difficult to generate the empathy needed to invest time into growing relationships based on anything other than sales volume.

Emotional intelligence is useful for identifying and defusing problems, building relationships, and facilitating repeat sales. You are not born with all of the emotional intelligence that you will ever need. Emotional intelligence is a reflection of a set of capabilities that can be improved. We should invest the time, effort, and money needed to increase our emotional intelligence capability.


Categories: Process Management


Silence speaks louder than words when finding malware

Google Code Blog - Tue, 01/17/2017 - 23:06
Originally posted on Android Developer Blog

Posted by Megan Ruthven, Software Engineer
In Android Security, we're constantly working to better understand how to make Android devices operate more smoothly and securely. One security solution included on all devices with Google Play is Verify apps. Verify apps checks if there are Potentially Harmful Apps (PHAs) on your device. If a PHA is found, Verify apps warns the user and enables them to uninstall the app.

But sometimes devices stop checking up with Verify apps. This may happen for a non-security related reason, like buying a new phone, or it could mean something more concerning is going on. When a device stops checking up with Verify apps, it is considered Dead or Insecure (DOI). An app with a high enough percentage of DOI devices downloading it is considered a DOI app. We use the DOI metric, along with other security signals, to help determine if an app is a PHA, in order to protect Android users. Additionally, when we discover vulnerabilities, we patch Android devices with our security update system. This blog post explores the Android Security team's research to identify the security-related reasons that devices stop working, and to prevent it from happening in the future.
Flagging DOI Apps
To understand this problem more deeply, the Android Security team correlates app install attempts and DOI devices to find apps that harm the device in order to protect our users.
With these factors in mind, we then focus on 'retention'. A device is considered retained if it continues to perform periodic Verify apps security check ups after an app download. If it doesn't, it's considered potentially dead or insecure (DOI). An app's retention rate is the percentage of all retained devices that downloaded the app in one day. Because retention is a strong indicator of device health, we work to maximize the ecosystem's retention rate. Therefore, we use an app DOI scorer, which assumes that all apps should have a similar device retention rate. If an app's retention rate is a couple of standard deviations lower than average, the DOI scorer flags it. A common way to calculate the number of standard deviations from the average is called a Z-score. The equation for the Z-score is below.
  • N = Number of devices that downloaded the app.
  • x = Number of retained devices that downloaded the app.
  • p = Probability that a device downloading any app will be retained.
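
The equation itself appeared as an image in the original post and did not survive aggregation. With the definitions above, the standard binomial Z-score being described would take the following form (a reconstruction, not the original figure):

$$ Z = \frac{x - Np}{\sqrt{N p (1 - p)}} $$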

In this context, we call the Z-score of an app's retention rate its DOI score. The DOI score indicates an app has a statistically significantly lower retention rate if the Z-score is much less than -3.7. This means that, if the null hypothesis were true, there would be much less than a 0.01% chance of the magnitude of the Z-score being that high. In this case, the null hypothesis is that any correlation between the app and a lower retention rate is accidental, independent of what the app does.
This allows for percolation of extreme apps (with low retention rate and high number of downloads) to the top of the DOI list. From there, we combine the DOI score with other information to determine whether to classify the app as a PHA. We then use Verify apps to remove existing installs of the app and prevent future installs of the app.
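
For readers who want to play with the mechanics, here is a minimal, hedged sketch of how such a score could be computed and thresholded. The function names and the example numbers are illustrative assumptions; only the -3.7 cutoff comes from the post, and none of this is Google's actual implementation.

```python
import math

def doi_score(n_downloads, n_retained, p_baseline):
    """Binomial Z-score of an app's retention rate (a reconstruction of the
    described DOI score, not Google's implementation).

    n_downloads : devices that downloaded the app (N)
    n_retained  : retained devices that downloaded the app (x)
    p_baseline  : probability that a device downloading any app is retained (p)
    """
    expected = n_downloads * p_baseline
    std_dev = math.sqrt(n_downloads * p_baseline * (1 - p_baseline))
    return (n_retained - expected) / std_dev

def is_flagged(score, threshold=-3.7):
    """Flag apps whose retention rate is significantly below the baseline."""
    return score < threshold

# Hypothetical example: 50,000 downloads, 46,000 retained, 95% baseline retention.
score = doi_score(50_000, 46_000, 0.95)
print(f"DOI score: {score:.1f}, flagged: {is_flagged(score)}")
```
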
[Figure: Difference between a regular and a DOI app download on the same device]
Results in the wild
Among others, the DOI score flagged many apps in three well-known malware families: Hummingbad, Ghost Push, and Gooligan. Although they behave differently, the DOI scorer flagged over 25,000 apps in these three families of malware because they can degrade the Android experience to such an extent that a non-negligible number of users factory reset or abandon their devices. This approach provides us with another perspective for discovering PHAs and blocking them before they gain popularity. Without the DOI scorer, many of these apps would have escaped the extra scrutiny of a manual review.
The DOI scorer and all of Android's anti-malware work is one of multiple layers protecting users and developers on Android. For an overview of Android's security and transparency efforts, check out our page.
Categories: Programming


Sponsored Post: Contentful, Stream, Loupe, New York Times, Scalyr, VividCortex, MemSQL, InMemory.Net, Zohocorp

Who's Hiring?
  • Contentful is looking for a JavaScript BackEnd Engineer to join our team in their mission of getting new users - professional developers - started on our platform within the shortest time possible. We are a fun and diverse family of over 100 people from 35 nations with offices in Berlin and San Francisco, backed by top VCs (Benchmark, Trinity, Balderton, Point Nine), growing at an amazing pace. We are working on a content management developer platform that enables web and mobile developers to manage, integrate, and deliver digital content to any kind of device or service that can connect to an API. See job description.

  • The New York Times is looking for a Software Engineer for its Delivery/Site Reliability Engineering team. You will also be a part of a team responsible for building the tools that ensure that the various systems at The New York Times continue to operate in a reliable and efficient manner. Some of the tech we use: Go, Ruby, Bash, AWS, GCP, Terraform, Packer, Docker, Kubernetes, Vault, Consul, Jenkins, Drone. Please send resumes to: technicaljobs@nytimes.com
Fun and Informative Events
  • Your event here!
Cool Products and Services
  • Build, scale and personalize your news feeds and activity streams with getstream.io. Try the API now in this 5 minute interactive tutorial. Stream is free up to 3 million feed updates so it's easy to get started. Client libraries are available for Node, Ruby, Python, PHP, Go, Java and .NET. Stream is currently also hiring Devops and Python/Go developers in Amsterdam. More than 400 companies rely on Stream for their production feed infrastructure, including apps with 30 million users. With your help we'd like to add a few zeros to that number. Check out the job opening on AngelList.

  • A note for .NET developers: You know the pain of troubleshooting errors with limited time, limited information, and limited tools. Log management, exception tracking, and monitoring solutions can help, but many of them treat the .NET platform as an afterthought. You should learn about Loupe...Loupe is a .NET logging and monitoring solution made for the .NET platform from day one. It helps you find and fix problems fast by tracking performance metrics, capturing errors in your .NET software, identifying which errors are causing the greatest impact, and pinpointing root causes. Learn more and try it free today.

  • Scalyr is a lightning-fast log management and operational data platform. It's a tool (actually, multiple tools) that your entire team will love. Get visibility into your production issues without juggling multiple tabs and different services - all of your logs, server metrics and alerts are in your browser and at your fingertips. Loved and used by teams at Codecademy, ReturnPath, Grab, and InsideSales. Learn more today or see why Scalyr is a great alternative to Splunk.

  • InMemory.Net provides a Dot Net native in memory database for analysing large amounts of data. It runs natively on .Net, and provides a native .Net, COM & ODBC apis for integration. It also has an easy to use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • VividCortex is a SaaS database monitoring product that provides the best way for organizations to improve their database performance, efficiency, and uptime. Currently supporting MySQL, PostgreSQL, Redis, MongoDB, and Amazon Aurora database types, it's a secure, cloud-hosted platform that eliminates businesses' most critical visibility gap. VividCortex uses patented algorithms to analyze and surface relevant insights, so users can proactively fix future performance problems before they impact customers.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network. 

If any of these items interest you there's a full description of each sponsor below...

Categories: Architecture

Wouldn't it be nice if everyone knew a little queuing theory?

After many days of rain, one lane of this two-lane road collapsed into the canyon. It's been out for a month, and it will be many more months before it is fixed. Thanks to Google Maps, way too many drivers take this once-sleepy local road.

How do you think drivers go through this chokepoint? 

 

 

One hundred experience points to you if you answered one at a time.

One at a time! Going through a half-duplex pipe under a first-in, first-out discipline takes forever!

Yes, there is a stop sign. And people default to this mode because it appeals to our innate sense of fairness. What could be fairer than alternating one at a time?

The problem is it's stupid.

While waiting, stewing, growing angrier, I often think if people just knew a little queueing theory we could all be on our way a lot faster.

We can't make the pipe full duplex, so that's out. Let's assume there's no priority involved, vehicles are roughly the same size and take roughly the same time to transit the network. Then what do you do?

Why can't people figure out it's faster to drive through in batches? If we went in groups of, say, three, the throughput would be much higher. And when one side's queue depth grows larger, because people are driving to or from work, that side's batch size should increase.
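
To make the intuition concrete, here is a rough simulation sketch (my own toy model, not from the post). It assumes each direction switch costs a fixed changeover delay and each car takes a fixed time to clear the chokepoint, then compares throughput for a few batch sizes; all the numbers are made-up assumptions chosen only to illustrate the effect.

```python
# Toy model of the one-lane chokepoint: compare throughput when cars alternate
# one at a time versus in batches. All timing numbers are made-up assumptions.

CROSS_TIME = 4.0     # seconds for one car to clear the single open lane
SWITCH_TIME = 6.0    # seconds of dead time every time the direction changes
SIM_SECONDS = 3600   # simulate one hour

def cars_per_hour(batch_size):
    """Cars served per hour when each direction sends `batch_size` cars per turn."""
    t, served = 0.0, 0
    while t < SIM_SECONDS:
        t += SWITCH_TIME                 # wait for the other direction to yield
        for _ in range(batch_size):      # send one batch from the current direction
            t += CROSS_TIME
            if t >= SIM_SECONDS:
                break
            served += 1
    return served

for batch in (1, 3, 6):
    print(f"batch size {batch}: ~{cars_per_hour(batch)} cars/hour")
```

With these assumed numbers, alternating one at a time yields roughly 360 cars per hour, while batches of three push it to about 600: the changeover cost is amortized over more cars, which is the whole argument in miniature.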

Since this condition will last a long time, there is an opportunity to learn, because the same people take this road all the time. So what happens if you try to change the culture by showing people what a batch is, by driving right behind someone as they take their turn?

You got it: honking. There's a simple heuristic, a deeply held ethic against line cutting, so people honk, flip you off, and generally make their displeasure known.

It's your classic battle of reason versus norms. The smart thing is the thing we can't do by our very natures. So we all just keep doing the dumb thing.

Categories: Architecture

My Most Popular Posts of 2016

Mike Cohn's Blog - Tue, 01/17/2017 - 16:00

Because I wrote a lot last year--25 blog posts and 50 weekly email tips--I wanted to start something new this year. So here’s a list of the most popular blog posts here during 2016. I hope it helps you catch up on any you missed during the year.

Using our own little algorithm that is a combination of page views, comments and time spent on the pages, here are my top 10 blog posts from 2016, counting down from number 10:

10) Applying Agile Beyond Software Development

Agile can be applied well beyond software development. It’s been used for construction, planning weddings, marketing and more. These are my thoughts on how agile could have saved a hotel chain from an expensive mistake.

9) What Are Story Points?

Story points are perhaps the most misunderstood topic in agile. Story points are not based on just one factor--such as complexity, as is often mistakenly claimed. Instead, story points are based on a combination of factors.

8) Advice on How to Split Reporting User Stories

Splitting stories has long been one of the biggest challenges facing agile teams. Here are some examples of splitting some reporting stories to demonstrate ways of splitting stories.

7) Does a Scrum Team Need a Retrospective Every Sprint?

Conventional wisdom says that a team should do a retrospective every sprint. But if your sprints are one week, can you do them every few sprints? That would still be more often than a team doing four-week sprints.

6) How to Prevent Estimate Inflation

A rose by any other name may smell as sweet, but a five-point story better not go by any other names. Or numbers. Here’s how to maintain consistency across estimates.

5) Summarizing the Results of a Sprint

Although you may wish it weren’t the case, some Scrum Masters need to document how a sprint went. Here’s advice on how to do that in a lightweight, agile manner.

4) The Dangers of a Definition of Ready

After seeing the value of a Definition of Done, some teams introduce a Definition of Ready. For many teams, this is a big mistake and a first step towards a waterfall process.

3) Don’t Estimate the Sprint Backlog Using Task Points

Some teams like story points so much, they invent task points and use those for sprint planning. Bad idea. Here’s why.

2) Sprint Planning for Agile Teams That Have Lots of Interruptions

Most of the Scrum literature describes a situation in which a team is allowed to work without interruption. But that’s not realistic. Here’s how an interrupt-driven team can plan its sprints.

1) A Simple Way to Run a Sprint Retrospective

There are many ways you can run a sprint retrospective. Here’s the simplest way and still my favorite.

What Do You Think?

Please let me know what you think. Is this list missing any of your favorites?

An Inferno on the Head of a Pin

Coding Horror - Jeff Atwood - Tue, 01/17/2017 - 12:37

Today's processors contain billions of heat-generating transistors in an ever shrinking space. The power budget might go from:

  • 1000 watts on a specialized server
  • 100 watts on desktops
  • 30 watts on laptops
  • 5 watts on tablets
  • 1 or 2 watts on a phone
  • 100 milliwatts on an embedded system

That's four orders of magnitude. Modern CPU design is the delicate art of placing an inferno on the head of a pin.

Look at the original 1993 Pentium compared to the 20th anniversary Pentium:

  • 1993 Pentium: 66 MHz, 16 KB L1 cache, 3.2 million transistors
  • 2014 Pentium G3258 (20th Anniversary Edition): 3.2 GHz, 2 cores / 4 threads, 128 KB L1 / 512 KB L2 / 3 MB L3 cache, 1.4 billion transistors

I remember cooling the early CPUs with simple heatsinks; no fan. Those days are long gone.

A roomy desktop computer affords cooling opportunities (and thus a watt budget) that a laptop or tablet could only dream of. How often will you be at peak load? For most computers, the answer is "rarely". The smaller the space, the higher the required performance, the more … challenging your situation gets.

Sometimes, I build servers.

Inspired by Google and their use of cheap, commodity x86 hardware to scale on top of the open source Linux OS, I also built our own servers. When I get stressed out, when I feel the world weighing heavy on my shoulders and I don't know where to turn … I build servers. It's therapeutic.

Servers are one of those situations where you may be at full CPU load more often than not. I prefer to build 1U servers which is the smallest rack mountable unit, at 1.75" total height.

You get plenty of cores on a die these days, so I build single CPU servers. One reason is price; the other is that clock speed drops as the number of cores on a die goes up (this is for the Broadwell Xeon V4 series):

  Model     Cores   GHz   Price
  E5-1630   4       3.7   $406
  E5-1650   6       3.6   $617
  E5-1680   8       3.4   $1723
  E5-2680   12      2.4   $1745
  E5-2690   14      2.6   $2090
  E5-2697   18      2.3   $2702

Yes, there are server CPUs with even more cores, but if you have to ask how much they cost, you definitely can't afford them … and they're clocked even slower. What we do is serviced better by a smaller number of super fast cores than a larger number of slow cores, anyway.

With that in mind, consider these two Intel Xeon server CPUs:

As you can see from the official Intel product pages for each processor, they both have a TDP heat budget of 140 watts. I'm scanning the specs, thinking maybe this is an OK tradeoff.

Unfortunately, here's what I actually measured with my trusty Kill-a-Watt for each server build as I performed my standard stability testing, with completely identical parts except for the CPU:

  • E5-1630: 40w idle, 170w mprime
  • E5-1650: 55w idle, 250w mprime

I am here to tell you that Intel's TDP figure of 140 watts for the 6 core version of this CPU is a terrible, scurrilous lie!

This caused a bit of a problem for me as our standard 1U server build now overheats, alarms, and throttles with the 6 core CPU — whereas the 4 core CPU was just fine. Hey Intel! From my home in California, I stab at thee!

But, you know..

Better Heatsink

The 1.75" maximum height of the 1U server form factor doesn't leave a lot of room for creative cooling of a CPU. But you can switch from an Aluminum cooler to a Copper one.

Copper is significantly more expensive, plus heavier and harder to work with, so it's generally easier to throw an ever-larger mass of aluminum at the cooling problem when you can. But when space is a constraint, as it is with a 1U server, copper dissipates more heat in the same form factor.

The famous "Ninja" CPU cooler came in identical copper and aluminum versions so we can compare apples to apples:

  • Aluminum Ninja — 24C rise over ambient
  • Copper Ninja — 17C rise over ambient

You can scale the load and the resulting watts of heat by spinning up MPrime threads for the exact number of cores you want to "activate", so that's how I tested:

  • Aluminum heatsink — stable at 170w (mprime threads=4), but heat warnings with 190w (mprime threads=5)
  • Copper heatsink — stable at 190w (mprime threads=5) but heat warnings with 230w (mprime threads=6)

Each run has to be overnight to be considered successful. This helped, noticeably. But we need more.
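
As an aside, if you want to reproduce this kind of stepped load without MPrime, a minimal sketch along these lines will keep a chosen number of cores busy. It is only an assumption-laden stand-in: a plain busy loop generates far less heat than MPrime's AVX-heavy math, so it occupies cores without matching the thermal stress described here.

```python
# Rough sketch of a stepped CPU load generator (not MPrime: a plain busy loop
# produces much less heat than MPrime's AVX-heavy math, so treat this only as
# a way to occupy N cores, not as an equivalent thermal stress test).
import multiprocessing
import sys
import time

def burn(stop_time):
    x = 1.0001
    while time.time() < stop_time:
        x = x * x % 1e9   # meaningless arithmetic to keep the core busy

if __name__ == "__main__":
    threads = int(sys.argv[1]) if len(sys.argv) > 1 else 4   # like "mprime threads=4"
    seconds = int(sys.argv[2]) if len(sys.argv) > 2 else 600
    stop_time = time.time() + seconds
    workers = [multiprocessing.Process(target=burn, args=(stop_time,)) for _ in range(threads)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```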

Better Thermal Interface

When it comes to server builds, I stick with the pre-applied grey thermal interface pad that comes on the heatsinks. But out of boredom and a desire to experiment, I …

  • Removed the copper heatsink.
  • Used isopropyl alcohol to clean both CPU and heatsink.
  • Applied fancy "Ceramique" thermal compound I have on hand, using an X shape pattern.

I wasn't expecting any change at all, but to my surprise with the new TIM applied it took 5x longer to reach throttle temps with mprime threads=6. Before, it would thermally throttle within a minute of launching the test, and after it took ~10 minutes to reach that same throttle temp. The difference was noticeable.

That's a surprisingly good outcome, and it tells us the default grey goop that comes pre-installed on heatsinks is ... not great. Per this 2011 test, the difference between worst and best thermal compounds is 4.3°C.

But as Dan once bravely noted while testing Vegemite as a thermal interface material:

If your PC's so marginal that a CPU running three or four degrees Celsius warmer will crash it [or, for modern CPUs, cause the processor to auto-throttle itself and substantially reduce system performance], the solution is not to try to edge away from the precipice with better thermal compound. It's to make a big change to the cooling system, or just lower the darn clock speed.

An improved thermal interface just gets you there faster (or slower); it doesn't address the underlying problem. So we're not done here.

Ducted Airflow

Most, but not all, of the SuperMicro cases I've used have included a basic fan duct / shroud that lays across the central fans and the system. Given that the case fans are pretty much directly in front of the CPU anyway, I've included the shroud in the builds out of a sense of completeness more than any conviction that it was doing anything for the cooling performance.

This particular server case, though, did not include a fan duct. I didn't think much about it at the time, but considering the heat stress this 6-core CPU and its 250 watt heat generation was putting on our 1U build, I decided I should build a quick duct out of card stock and test it out.

(I know, I know, it's a super janky duct! But I was prototyping!)

Sure enough, this duct, combined with the previous heatsink and TIM changes, enabled the server to remain stable overnight with a full MPrime run of 12 threads.

I think we've certainly demonstrated the surprising (to me, at least) value of fan shrouds. But before we get too excited, let's consider one last thing.

Define "CPU Load"

Sometimes you get so involved with solving the problem at hand that you forget to consider whether you are, in fact, solving the right problem.

In these tests, we defined 100% CPU load using MPrime. Some people claim MPrime is more of a power virus than a real load test, because it exerts so much heat pressure on the CPUs. I initially dismissed these claims since I've used MPrime (and its Windows cousin, Prime95) for almost 20 years to test CPU stability, and it's never let me down.

But I did more research and I found that MPrime, since 2011, uses AVX2 instructions extensively on newer Intel CPUs:

The newer versions of Prime load in a way that they are only safe to run at near stock settings. The server processors actually downclock when AVX2 is detected to retain their TDP rating. On the desktop we're free to play and the thing most people don't know is how much current these routines can generate. It can be lethal for a CPU to see that level of current for prolonged periods.

That's why most stress test programs alternate between different data pattern types. Depending on how effective the rotation is, and how well that pattern causes issues for the system timing margin, it will, or will not, catch potential for instability. So it's wise not to hang one's hat on a single test type.

This explains why I saw such a large discrepancy between other CPU load programs like BurnP6 and MPrime.

MPrime does an amazing job of generating the type of CPU load that causes maximum heat pressure. But unless your servers regularly chew through zillions of especially power-hungry AVX2 instructions this may be completely unrepresentative of any real world load your server would actually see.

Your Own Personal Inferno

Was this overkill? Probably. Even with the aluminum heatsink, no change to thermal interface material, and zero ducting, we'd probably see no throttling under normal use in our server rack. But I wanted to be sure. Completely sure.

Is this extreme? Putting 140 watts of CPU TDP in a 1U server? Not really. Nick at Stack Overflow told me they just put two 22-core, 145W TDP Xeon E5-2699 v4 CPUs and four 300W TDP GPUs in a single Dell C4130 1U server. I'd sure hate to be in the room when those fans spin up. I'm also a little afraid to find out what happens if you run MPrime plus full GPU load on that box.

Servers are an admittedly rare example of big CPU performance, heat, and size tradeoffs, one of the few left. It is fun to play at the extremes, but the SoC inside your phone makes the same tradeoffs on a smaller scale. Tiny infernos in our pockets, each and every one.

[advertisement] At Stack Overflow, we put developers first. We already help you find answers to your tough coding questions; now let us help you find your next job.
Categories: Programming

Agile Results Refresher for 2017

I’ve put together a quick refresher on Agile Results for 2017:

Agile Results Refresher for 2017

I tried to keep it simple and to the point, while at the same time helping new folks who don't know what Agile Results is really sink their teeth into it.

For example, one important idea is that it’s effectively a system to use your best energy for your best results.

I’ve seen people struggle with getting results for years, and one of the most common patterns I see is they use their worst energy for their most important activities.

Worse, they don’t know how to change their energy.

So now they are doing work they hate, because they feel like crap, and this feeling becomes a habit.

The irony is that they would enjoy their work if they just knew how to flip the switch and reimagine their work as an opportunity to experiment and explore their full potential.

Work is actually one of the ultimate forms of self-expression.

Your work can be your dojo where you practice building your abilities, creating your competencies, and sharpening your skills in all areas of your life.

But the real key is to bridge work and life through your values.

If you can find a way to bake your values into how you show up each day, whether at home or in the office, that’s the real secret to living the good life.

But what’s the key to living the great life?

The key to living the great life is to give your best where you have your best to give in the service of others.

Agile Results is a way to help you do that.

Check out the refresher on Agile Results and use the Rule of Three to rule your day.

If you already know Agile Results, teach three people and help them live and lead a more inspired life.

Game on.

Categories: Architecture, Programming

Quote of the Day

Herding Cats - Glen Alleman - Mon, 01/16/2017 - 16:57

"We must learn to live together as brothers or perish together as fools." – Martin Luther King Jr.

Categories: Project Management

Meet the 20 finalists of the Google Play Indie Games Contest

Android Developers Blog - Mon, 01/16/2017 - 12:15
Posted by Matteo Vallone, Google Play Games Business Development

Back in November, we launched the Google Play Indie Games Contest for developers from 15 European countries, to celebrate the passion and innovation of the indie community in the region. The contest will reward the winners with exposure to industry experts and players worldwide, as well as other prizes that will showcase their art and help them grow their business on Android and Google Play.

Thank you to the nearly 1000 of you who submitted high quality games in all types of genres! Your creativity, enthusiasm and dedication have once again impressed us and inspired us. We had a very fun time testing and judging the games based on fun, innovation, design excellence and technical and production quality, and it was challenging to select only 20 finalists:

Meet the 20 finalists
(In alphabetical order)

  • Blind Drive (coming soon) – Lo-Fi People, Israel
  • Causality (coming soon) – Loju, United Kingdom
  • Crap! I'm Broke: Out of Pocket – Arcane Circus, Netherlands
  • Egz – Lonely Woof, France
  • Ellipsis – Salmi GmbH, Germany
  • Gladiabots – GFX47, France
  • Happy Hop: Kawaii Jump – Platonic Games, Spain
  • Hidden Folks (coming soon) – Adriaan de Jongh, Netherlands
  • Lichtspeer (coming soon) – Lichthund, Poland
  • Lost in Harmony – Digixart Entertainment, France
  • Mr Future Ninja (coming soon) – Huijaus Studios, Finland
  • Paper Wings – Fil Games, Turkey
  • PinOut – Mediocre, Sweden
  • Power Hover – Oddrok, Finland
  • Reigns – Nerial, United Kingdom
  • Rusty Lake: Roots – Rusty Lake, Netherlands
  • Samorost 3 – Amanita Design, Czech Republic
  • The Battle of Polytopia – Midjiwan AB, Sweden
  • twofold inc. – Grapefrukt games, Sweden
  • Unworded (coming soon) – Bento Studio, France

Check out the prizes

All 20 finalists are getting:
  • The opportunity to exhibit and showcase their game at the final event held at the Saatchi Gallery in London, on 16th February 2017.
  • Promotion of their game on a London billboard for one month.
  • Two tickets to attend a 2017 Playtime event. This is an invitation-only event for top apps and games developers on Google Play.
  • One Pixel XL smartphone.
At the event at Saatchi, the finalists will also have a chance to make it to the next rounds and win additional prizes, including:
  • YouTube influencer campaigns worth up to 100,000 EUR.
  • Premium placements on Google Play.
  • Tickets to Google I/O 2017 and other top industry events.
  • Promotions on our channels.
  • Special prizes for the best Unity game.
  • And more!

Come support them at the final event

At the final event, attendees will have a say in which 10 of these finalists get to pitch their games to the jury, which will then decide on the contest winners who receive the top prizes.

Register now to join us in London, meet the developers, check out their great games, vote for your favourites, and have fun with various industry experts and indie developers.



A big thank you again to everyone who entered and congratulations to the finalists. We look forward to seeing you at the Saatchi Gallery in London on 16th February.
Categories: Programming

Where to Look for Trends and Insights

“The best is yet to come.”

It can be tough creating the future among the chaos.

The key is to get a good handle on the real and durable trends that lie beneath the change and churn that’s all around you.

But how do you get a good handle on the key disruptions, the key trends, and the macro-level patterns that matter?

Draw from multiple sources that help you see the big picture in a simple way.

To get started, I’m going to share the key sources for trends and insights that I draw from (beyond my own experience and what I learn from working with customers and colleagues from around the world).

Here are the key sources for trends and insights that I draw from:

  1. Age of Context (Book), by Robert Scoble and Shel Israel.  Age of Context provides a walkthrough of 5 technological forces shaping our world: 1) mobile devices, 2) social media, 3) big data, 4) sensors, 5) location-based services.
  2. Cognizant – A global leader in business and technology services, helping clients bring the future of work to life — today.
  3. DaVinci Institute – The DaVinci Institute is a non-profit futurist think tank. But unlike traditional research-based consulting organizations, the DaVinci Institute operates as a working laboratory for the future human experience: a community of entrepreneurs and visionary thinkers intent on discovering the (future) opportunities created when cutting-edge technology meets the rapidly changing human world.
  4. Faith Popcorn – The “Trend Oracle.”  Faith is a key strategist for BrainReserve and trusted advisor to the CEOs of The Fortune 500.  She’s identified movements such as, “Cocooning,” “AtmosFear,” “Anchoring,” “99 Lives,” “Icon Toppling” and “Vigilante Consumer.”
  5. Fjord – Fjord produces an annual report to help guide you through challenges, experiences, and opportunities you, your organization, employees, customers, and stakeholders will likely face.  Check out the Fjord Trends 2017 report on SlideShare.
  6. Foresight Factory (Formerly called Future Foundation) – Future focused, applied, global consumer insight. Universal trends that shape tastes and determine demand the world over; sector trends that are critical to success in specific industries; custom reports produced in partnership with clients and focus reports on key markets, regions and topics.
  7. Forrester – Research to help you make better decisions in a world where technology is radically changing your customer.
  8. Gartner – The world’s leading information technology research and advisory company.
  9. Global Goals – In September 2015, 193 world leaders agreed to 17 Global Goals for Sustainable Development. If these Goals are completed, it would mean an end to extreme poverty, inequality and climate change by 2030.
  10. IBM Executive Exchange – An issues-based portal providing news, thought leadership, case studies, solutions, and social media exchange for C-level executives.
  11. Jim Carroll – A world-leading futurist, trends, and innovation expert, with a track record for strategic insight.  He is author of the book The Future Belongs to Those Who Are Fast, and he shares major trends, as well as trends by industry, on his site.
  12. Motley Fool – To educate, amuse, and enrich.
  13. No Ordinary Disruption (Book) – This is a deep dive into the future, backed with data, stories, and insight.  It highlights four forces colliding and transforming the global economy: 1) the rise of emerging markets, 2) the accelerating impact of technology on the natural forces of market competition, 3) an aging world population, 4) accelerating flows of trade, capital, people, and data.
  14. O’Reilly Ideas – Insight, analysis, and research about emerging technologies.
  15. Richard Watson – A futurist author, speaker and scenario planner, and the chart maker behind The Table of Trends and Technologies for the World in 2020 (PDF). Watson writes the What’s Next Top Trends blog and is the author of four books: Future Files, Future Minds, Futurevision, and The Future: 50 Ideas You Really Need to Know.
  16. Sandy Carter — Sandy Carter is IBM Vice President  of Social Business and Collaboration, and author of The New Language of Marketing 2.0, The New Language of Business, and Get Bold: Using Social Media to Create a New Type of Social Business.  She’s not just fun to read or watch – she has some of the best insight on social innovation.
  17. The Industries of the Future (Book), by Alec Ross.  Alec Ross explains what’s next for the world: the advances and stumbling blocks that will emerge in the next ten years, and how we can navigate them.
  18. The Second Machine Age, by Erik Brynjolfsson and Andrew McAfee.  Erik Brynjolfsson and Andrew McAfee identify the best strategies for survival and offer a new path to prosperity amid exponential technological change. These include revamping education so that it prepares people for the next economy instead of the last one, designing new collaborations that pair brute processing power with human ingenuity, and embracing policies that make sense in a radically transformed landscape.
  19. ThoughtWorks Technology Radar – Thoughts from the ThoughtWorks team on the technology and trends that are shaping the future.
  20. Trend Hunter – Each day, Trend Hunter features a daily dose of micro-trends, viral news and pop culture. The most popular micro-trends are featured on Trend Hunter TV and later grouped into clusters of inspiration in our Trend Reports, a series of tools for professional innovators and entrepreneurs.
  21. Trends and Technologies for the World in 2020 (PDF) – Table of trends and technologies shaping the world in 2020.
  22. Trendwatching.com – Trendwatching.com helps forward-thinking business professionals in 180+ countries understand the new consumer and subsequently uncover compelling, profitable innovation opportunities.

While it might look like a short-list, it’s actually pretty deep.

It’s like a Russian nesting doll in that each source might lead you to more sources or might be the trunk of a tree that has multiple branches.

These sources of trends and insights have served me well and continue to serve me as I look to the future and try to figure out what’s going on.

But more importantly, they all inspire me in some way to create the future, rather than wait for it to just happen.

I’m a big fan of making things happen … you play the world, or the world plays you.

You Might Also Like

All Digital Transformation Articles

Digital Transformation Books

Consumer Trend Canvas

Trend Framework

101 Hacks for a Better Year

Categories: Architecture, Programming

SPaMCAST 426 – SPaMCAST Round Table, Quality, Agile and Security

SPaMCAST Logo

http://www.spamcast.net

Listen Now
Subscribe on iTunes
Check out the podcast on Google Play Music

SPaMCAST 426 marks a milestone!  SPaMCAST 426 is the end of Year 10.  The Cast features our second annual round table.  Almost all of the SPaMCAST contributors gathered virtually to discuss a number of topics, including:

  1. Is software quality really one of the most important focuses in IT in 2017?
  2. Even though people are adopting agile, is agile as a principle-driven movement over?
  3. In 2017 will security trump quality and productivity?

The multiway discussion was exciting and informative! This was a great way to finish year 10 and get the motor primed for year 11!

Re-Read Saturday News

This week we begin the re-read of Carol Dweck’s Mindset: The New Psychology of Success. We will start slowly as I read ahead and give you time to find or buy a copy of the book. I am reading the 2008 Ballantine Books Trade paperback edition (I had to re-buy the book, as my first copy seems to have found a new home).

I was excited that the Software Process and Measurement Blog readers selected Mindset for Re-read Saturday. I am looking forward to refreshing my understanding of the powerful ideas Dweck identifies as growth and fixed mindsets. Mindsets are very useful for understanding why some people grow and others don’t, and why some teams excel and others less so. Also, Mindset is easily the single most quoted book I have seen in presentations at conferences for the past few years.

Next week we start in on Chapter One of the re-read of Carol Dweck’s Mindset, so buy a copy this week.

Visit the Software Process and Measurement Cast blog to participate in this and previous re-reads.

Next SPaMCAST

The Software Process and Measurement Cast 427 begins Year 11 with an essay on the Post-Agile Age. It is coming, and it is a bed that human nature and commercial pressures have created. (Not sure what I mean? Tune in, stream or download.) We will also have columns from Jon Quigley, Jeremy Berriault, and Kim Pries. SPaMCAST 427 will celebrate the new SPaMCAST year in style!

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.

 


Categories: Process Management


Don't Build That Product

Xebia Blog - Sun, 01/15/2017 - 12:06
At the Agile Chef Conference I facilitated a workshop where participants could experience how Aikido can be used to resolve conflicts in the workplace as well, by applying verbal Aikido. At the end of the session someone asked me to demonstrate the best defence against a sword attack; I responded by turning around and

Consumer Trend Canvas

Consumer Trends are a key building block for innovation.

If you are stuck coming up with innovation opportunities, part of the problem may be that you are missing sources of insight.

And one of the best sources of insight is actually consumer trends.

One tool for helping you turn consumer trends into innovation opportunities is the Consumer Trend Canvas, by Trendwatching.com.

[Image: the Consumer Trend Canvas]

What I like about it is the simplicity, the elegance, and the fact that it’s similar in format to the Business Model Canvas.

The Consumer Trend Canvas is broken down into two simple sections:

  1. Analyze
  2. Apply

Pretty simple.

In terms of the overall canvas, it’s actually a map of the following 7 components:

  1. Basic Needs
  2. Drivers of Change
  3. Emerging Customer Expectations
  4. Inspiration
  5. Innovation Potential
  6. Who
  7. Your Innovations

From a narrative standpoint, you can think of it in terms of pains, needs, and desired outcomes for a particular persona, along with the innovation opportunities that flow from that simple frame.

The real beauty of the Consumer Trend Canvas is that it’s a question-driven approach to revealing innovation opportunities.

Here are the questions within each of the parts of the Consumer Trend Canvas:

  1. Which deep consumer needs & desires does this trend address?
  2. Why is this trend emerging now? What’s changing?
  3. What new consumer needs, wants, and expectations are created by the changes identified above? Where and how does this trend satisfy them?
  4. How are other businesses applying this trend?
  5. How and where could you apply this trend to your business?
  6. To which (new) customer groups could you apply this trend? What would you have to change?

When you put it all together, you have a quick and simple view of how a trend can lead to some potential innovations.

The power is in the simplicity and in the consolidation.
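
If you like keeping canvases like this alongside your other working notes, the structure is simple enough to capture as plain data. The sketch below is just one hypothetical way to lay it out in Python; the field names mirror the seven components above and are not an official Trendwatching.com format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ConsumerTrendCanvas:
    """A plain-data sketch of the Consumer Trend Canvas sections.

    Hypothetical layout for note-keeping only; the field names simply
    mirror the seven components described above."""
    trend_name: str
    # Analyze
    basic_needs: List[str] = field(default_factory=list)
    drivers_of_change: List[str] = field(default_factory=list)
    emerging_expectations: List[str] = field(default_factory=list)
    inspiration: List[str] = field(default_factory=list)           # how others apply the trend
    # Apply
    innovation_potential: List[str] = field(default_factory=list)  # where you could apply it
    who: List[str] = field(default_factory=list)                   # (new) customer groups
    your_innovations: List[str] = field(default_factory=list)

# Example usage with placeholder entries:
canvas = ConsumerTrendCanvas(
    trend_name="Example trend",
    basic_needs=["convenience", "recognition"],
    drivers_of_change=["always-on mobile devices"],
    your_innovations=["a concierge feature inside the existing app"],
)
print(canvas.trend_name, len(canvas.your_innovations))
```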

You Might Also Like

Trend Framework

8 Big Trends

10 High-Value Activities in the Enterprise

Hack a Happy New Year

Continuous Value Delivery the Agile Way

Categories: Architecture, Programming

Mindset: The New Psychology of Success, Reviews, Carol S. Dweck, Ph.D.: Re-Read Week 1, Basics and Introduction

Mindset Book Cover

This week we begin the re-read of Carol Dweck’s Mindset: The New Psychology of Success. We will start slowly as I read ahead and give you time to find or buy a copy of the book. I am reading the 2008 Ballantine Books Trade paperback edition (I had to re-buy the book, as my first copy seems to have found a new home).

I was excited that the Software Process and Measurement Blog readers selected Mindset for Re-read Saturday. I am looking forward to refreshing my understanding of the powerful ideas Dweck identifies as growth and fixed mindsets. Mindsets are very useful for understanding why some people grow and others don’t, and why some teams excel and others less so. Also, Mindset is easily the single most quoted book I have seen in presentations at conferences for the past few years.

Reading Game Plan!  I am planning to review a chapter a week with a week for the introduction and logistics and a week for a wrap-up.  The math would suggest that the re-read will extend over 10 to 11 weeks, including today.  I am factoring in an off week for my trip to Mumbai, Delhi, and Agra (let me know if you are in one of those cities).  If you do not have a copy of the book, buy one (use this link to support the blog and podcast) and if you do have a copy find it and get your highlighter out!

Introduction

Dweck reminds us that psychology shows the power of people’s beliefs. We are shaped by our beliefs and biases. Even if we aren’t aware of those beliefs consciously, they strongly affect what we want and whether we succeed in getting it. This premise is the core concept behind Mindset. Each chapter in the book presents a set of findings and the accounts of people that support those findings. At the end of each chapter, Dweck provides a set of ways to apply those lessons: to recognize the mindset that is guiding your life, to understand how that mindset works, and then to change that mindset if you wish.

As a coach and mentor, I find Mindset provides a solid framework that, combined with emotional intelligence, is useful for assessing the person or team I am working with. On a personal note, as I read ahead to prepare for this weekly feature, the concepts and practical exercises have been useful as a tool for self-reflection.

Next week we begin the heavy lifting with Chapter One, which is titled Mindsets.  

 


Categories: Process Management

Stuff The Internet Says On Scalability For January 13th, 2017

Hey, it's HighScalability time:

 

So you think you're early to market! The Man Who Invented VR Goggles 50 Years Too Soon
If you like this sort of Stuff then please support me on Patreon.
  • 99.9: Percent PCs cheaper than in 1980; 300x20 miles: California megaflood; 7.5 million: articles published on Medium; 1 million: Amazon paid eBook downloads per day; 121: pages on P vs. NP; 79%: Americans use Facebook; 1,600: SpaceX satellites to fund a city on Mars; 

  • Quotable Quotes:
    • @GossiTheDog: How corporate security works: A) buy a firewall B) add a rule allowing all traffic C) the end
    • @caitie: Distributed Systems PSA: your regular reminder that the operational cost of a system should be included & considered when designing a system
    • @jimpjorps: 1998: the internet means you can "telecommute" to a tech job from anywhere on Earth 2017: everyone works in the same one square mile of SF
    • Jessi Hempel: [re: BitTorrent] Perhaps the lesson here is that sometimes technologies are not products. And they’re not companies. They’re just damn good technologies.
    • giltene: My new pet peeve: "how to make X faster: do less of X" recommendations.
    • peterwwillis: It used to be you had to actually break into a system to exfiltrate all its data. Now you just make an HTTP query.
    • Laralyn McWillams: Identify problems but focus on solutions. If you become more about problems than solutions, that negativity infects your work, your team, and how you think about your career.
    • Chris Fox: Apple is 100% a boutique retailer, meaning that a human chooses which books to promote. Without that, there was no organic discovery tool where readers could find your book.
    • vytah: In fact, the 1986 [Chernobyl] disaster happened because the engineers decided to get rid of safeguards and run tests.
    • Eric Elliott: Breaking into a user’s top 5 apps is like getting struck by lightning or winning the lottery. Don’t bank on it.
    • Peter: I say the super-intelligent aliens will be powered by hyper-computation, a technology that makes our concept of computation look like counting on your fingers; and they’ll have not only qualia, but hyper-qualia, experiential phenomenologica whose awesomeness we cannot even speak of.
    • SEJeff: LVS is pretty much the undisputed king for serious business load balancing. I've heard (anecdotally) that Uber uses gorb[1] and google has released seesaw, which are both fancy wrappers ontop of LVS for load balancing.
    • k__: I have the feeling this is haunting my life. Jobs, relationships, everything. When I got something, it didn't feel that hard to get it. When I try to get something it feels impossible.
    • Nelson Elhage: One of my favorite concepts when thinking about instrumenting a system to understand its overall performance and capacity is what I call “time utilization”. By this I mean: If you look at the behavior of a thread over some window of time, what fraction of its time is spent in each “kind” of work that it does?
    • Bart Sano (Google): I can say that we are committed to the choice of these different architectures, including X86 – and that includes AMD – as well as Power and ARM. The principle that we are investing in heavily is that competition breeds innovation, 
    • aaron-lebo: This is a larger issue with developer burnout I suspect. You master one thing and there's someone standing on the corner saying..."well, actually, I've got something better" and there's a very real anxiety in that evaluation process. Does object-oriented programming suck? Are functional languages the future? Do you really want an SPA? Should you replace your C codebase with Rust... or Go? Is Bitcoin worth getting in on? etc etc
    • StorageMojo: [re: Violin’s bankruptcy] The race is not always to the swift, nor riches to the wise. By starting with software, other companies built an early lead, and now have the money and time to optimize hardware for flash.
    • nocarrier: [Why no datacenters in India?] Cost was a smaller factor than politics; the Indian government wanted the private keys for our certs in order to let FB put a POP there. That was an absolute dealbreaker, so we served India from Singapore and other POPs in nearby countries.
    • RDX: So that original post, although long and full of real examples, was not about Javascript fatigue really. Its change fatigue. Let’s be clear, if you’re picking something new, you’re making a conscious choice to grow up with it.
    • @jamesurquhart: Amazing that emergent tech that’ll revolutionize software dev is already almost a commodity utility service. #streaming #serverless #events

  • The Ethics of Autonomous Cars. The obvious revenue model is highest bidder lives. During the first few milliseconds of a crash response a real-time bidding session is created and the lowest bidder assumes the risk. That at least captures the zeitgeist of the times.

  • First Go. Now poker. DeepStack: Expert-Level Artificial Intelligence in No-Limit Poker. Thank the force humans are still unbeatable at Sabacc. 

  • Medium may be the first YA (Young Adult, think Hunger Games) style publishing outlet. YA is often written in first-person present. It's a good way to fake authenticity. Traditional publications use third-person past tense, but that's not what works best on Medium. What I learned from analyzing the top 252 Medium stories of 2016: The words “you” and “I” were by far the most common, which suggests that addressing the reader directly as an individual person is a better writing strategy than writing in third person.

  • Ben Kehoe says AWS Step Functions is not the cheap, high-scale, event-driven state machine he has been looking for. FaaS is stateless, and AWS Step Functions provides state as-a-Service: at $0.025 per 1,000 executions, it’s 125 times more expensive per invocation than Lambda (see the quick arithmetic after this list); it’s not going to be cost-effective to replace existing roll-your-own Lambda solutions; the default throttling limit for a state machine is two executions per second...it’s not built to handle massively scaled but transient event scheduling.

  • Ransomware has shifted to being a reproducible strategy. @SteveD3: Since I first covered the MongoDB hacking on Jan 3, the number of compromised DBs has surpassed 32,000. Now possibly Elasticsearch. Basically anything you can find with Shodan. Which is why we now have @GossiTheDog: Found out today firms have started doing legal contracts which specifically rule out liability if they get hit by ransomware, naming it.
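
Back to the Step Functions pricing point above: the 125x figure is easy to reproduce from the published per-request prices. The arithmetic below assumes Lambda’s list price of $0.20 per million requests and ignores Lambda’s separate duration charge and both services’ free tiers.

```python
# Reproducing the per-invocation comparison quoted above, using the prices
# as stated: Step Functions at $0.025 per 1,000 executions and (assumed)
# Lambda at $0.20 per 1,000,000 requests. Lambda's per-duration charge and
# both free tiers are ignored here.
step_functions_per_invocation = 0.025 / 1_000      # $2.5e-5
lambda_per_invocation = 0.20 / 1_000_000           # $2.0e-7

ratio = step_functions_per_invocation / lambda_per_invocation
print(f"Step Functions is {ratio:.0f}x the per-invocation price of Lambda")
# -> Step Functions is 125x the per-invocation price of Lambda
```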

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture