Software Development Blogs: Programming, Software Testing, Agile, Project Management

Methods & Tools

Subscribe to Methods & Tools if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

Modifying email signatures with the Gmail API

Google Code Blog - 9 hours 46 min ago
Originally posted on G Suite Developer blog

Posted by Wesley Chun (@wescpy), Developer Advocate, G Suite

The Gmail API team introduced a new settings feature earlier this year, and today, we're going to explore some of that goodness, showing developers how to update Gmail user settings with the API.

Email continues to be a dominant form of communication, personally and professionally, and our email signature serves as both a lightweight introduction and a business card. It's also a way to slip in a sprinkling of your personality. Wouldn't it be interesting if you could automatically change your signature whenever you wanted, without opening the Gmail settings interface every time? That is exactly what our latest video is all about.

If your app has already created a Gmail API service endpoint, say in a variable named GMAIL, and you have the email address (YOUR_EMAIL) whose signature should be changed as well as the text of the new signature, updating it via the API is pretty straightforward, as illustrated by this Python call to the GMAIL.users().settings().sendAs().patch() method:

# patch only the signature field of the send-as alias
signature = {'signature': '"I heart cats."  ~anonymous'}
GMAIL.users().settings().sendAs().patch(userId='me',
    sendAsEmail=YOUR_EMAIL, body=signature).execute()

For more details about the code sample used in the requests above as well as in the video, check out the deepdive post. In addition to email signatures, other settings the API can modify include: filters, forwarding (addresses and auto-forwarding), IMAP and POP settings to control external email access, and the vacation responder. Be aware that while API access to most settings is available for any G Suite Gmail account, a few sensitive operations, such as modifying send-as aliases or forwarding, are restricted to users with domain-wide authority.
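
For instance, the vacation responder can be toggled with a similar call; the sketch below is illustrative rather than taken from the post, reusing the GMAIL service object from above with the users.settings.updateVacation method (the subject and message strings are placeholders):

vacation = {
    'enableAutoReply': True,
    'responseSubject': 'Out of office',
    'responseBodyPlainText': 'Back next week -- replies will be slow.',
    'restrictToContacts': False,
}
GMAIL.users().settings().updateVacation(userId='me', body=vacation).execute()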

Developers interested in using the Gmail API to access email threads and messages instead of settings can check out this other video where we show developers how to search for threads with a minimum number of messages, say to look for the most discussed topics from a mailing list. Regardless of your use-case, you can find out more about the Gmail API in the developer documentation. If you're new to the API, we suggest you start with the overview page which can point you in the right direction!

Be sure to subscribe to the Google Developers channel and check out other episodes in the G Suite Dev Show video series.

Categories: Programming

#NoEstmates and the System of Profound Ignorance (SOPI)

Herding Cats - Glen Alleman - 12 hours 53 min ago

Deming's approach to problem solving is based on The System of Profound Knowledge (SOPK). The anti-pattern to SOPK is The System of Profound Ignorance (SOPI). 

To obtain a System of Profound Knowledge, we need to start with the four components - appreciation of the system, theory of knowledge, knowledge of variation, and the psychology of the people and process working within the system.

  • A solid understanding of each component from a conceptual perspective:
    • What is the variation?
    • Why do we need to know about it?
    • How will these variations favorably or unfavorably impact the outcomes of our work?
  • A solid understanding of how each component interacts with the other three, from a conceptual perspective (e.g., how does a theory of knowledge impact how you interpret a system's behavior?)
  • A solid understanding of the above as applied to a domain (e.g., software development).
  • A solid understanding of the above as applied to a specific organization (e.g., our team).

While these understandings are standard for any credible process of managing other people's money, the #NoEstimates advocates have the inverse of this knowledge in play.

The leaders of the notion that decisions can be made in the presence of uncertainty without estimating seem to have missed the core principles of decision making. The opposite of knowledge is ignorance, so the framework of #NoEstimates is an anti-SOPK.

  • Lack of Appreciation for the System of business management - business management is a closed-loop process to maximize the Value delivered in exchange for the Cost. How much it will cost to produce the value is uncertain, and the value itself is an uncertain outcome, so the business must estimate both. Decision Analysis for the Professional is a good place for the business to start.
  • Lack of knowledge of Variation - natural variations and event-based variations come from uncertainty. These uncertainties are the source of risk. And we have to remember Tim Lister's quote: Risk Management is How Adults Manage Projects. Doing no risk management amounts to assuming there is no uncertainty.
  • Lack of a Theory of Knowledge - opinion and gut feel rule the conversation. There is no incentive to go where the data is.
  • Lack of appreciation of the needs of those paying for the work - the suggestion that estimates are waste doesn't define who they are a waste for. It may well be that coders see estimating as a waste, but it's not their money. Want to get paid? The business needs to know how much your work will cost at the beginning, during the work, and as it approaches the end. This is the Estimate to Complete and Estimate at Completion information needed by the business to make decisions (a small worked example follows this list). Waiting until the end of the project to learn these numbers is nonsense.
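
For readers unfamiliar with those two numbers, here is a small worked example of the standard earned-value formulas behind them; the dollar figures are made up for illustration:

BAC = 1000000.0  # budget at completion
EV = 400000.0    # earned value (budgeted cost of work performed)
AC = 500000.0    # actual cost of work performed

CPI = EV / AC                  # cost performance index; < 1 means over budget
EAC = AC + (BAC - EV) / CPI    # Estimate at Completion, assuming CPI holds
ETC = EAC - AC                 # Estimate to Complete
print('CPI=%.2f EAC=$%.0f ETC=$%.0f' % (CPI, EAC, ETC))
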
Related articles:
  • Some More Background on Probability, Needed for Estimating
  • Risk Management is How Adults Manage Projects
  • Herding Cats: Estimates, Forecasts, Projections
  • Making Decisions In The Presence of Uncertainty
  • Estimating and Making Decisions in Presence of Uncertainty
  • Making Conjectures Without Testable Outcomes
  • Quote of the Day
  • Do The Math
  • The Microeconomics of a Project Driven Organization
Categories: Project Management

The Impostor Software Developer Syndrome

From the Editor of Methods & Tools - 13 hours 7 min ago
Did you ever feel like a fraud as a software developer? Have the feeling that at some point, someone is going to find out that you really don’t belong where you are? That you are not as smart as other people think? You are not alone with this; many high-achieving people suffer from the impostor […]

Final update to Android 7.1 Developer Preview

Android Developers Blog - 22 hours 37 min ago

Posted by Dave Burke, VP of Engineering

Today we're rolling out an update to the Android 7.1 Developer Preview -- the last before we release the final Android 7.1.1 platform to the ecosystem. Android 7.1.1 includes the developer features already available on Pixel and Pixel XL devices and adds optimizations and bug fixes on top of the base Android 7.1 platform. With Developer Preview 2, you can make sure your apps are ready for Android 7.1.1 and the consumers that will soon be running it on their devices.

As highlighted in October, we're also expanding the range of devices that can receive this Developer Preview update to Nexus 5X, Nexus 6P, Nexus 9, and Pixel C.

If you have a supported device that's enrolled in the Android Beta Program, you'll receive an update to Developer Preview 2 over the coming week. If you haven't enrolled your device yet, just visit the site to enroll your device and get the update.

In early December, we'll roll out Android 7.1.1 to the full lineup of supported devices as well as Pixel and Pixel XL devices.

What's in this update?

Developer Preview 2 is a release candidate for Android 7.1.1 that you can use to complete your app development and testing in preparation for the upcoming final release. It includes near-final system behaviors and UI, along with the latest bug fixes and optimizations across the system and Google apps.

It also includes the developer features and APIs (API level 25) already introduced in Developer Preview 1. If you haven't explored the developer features, you'll want to take a look at app shortcuts, round icon resources, and image keyboard support, among others -- you can see the full list of developer features here.

With Developer Preview 2, we're also updating the SDK build and platform tools in Android Studio, the Android 7.1.1 platform, and the API Level 25 emulator system images. The latest version of the support library (25.0.1) is also available for you to add image keyboard support, bottom navigation, and other features for devices running API Level 25 or earlier.

For details on API Level 25 check out the API diffs and the updated API reference on the developer preview site.

Get your apps ready for Android 7.1

Now is the time to optimize your apps to look their best on Android 7.1.1. To get started, update to Android Studio 2.2.2 and then download the API Level 25 platform, emulator system images, and tools through the SDK Manager in Android Studio.

After installing the API Level 25 SDK, you can update your project's compileSdkVersion to 25 to build and test against the new APIs. If you're doing compatibility testing, we recommend updating your app's targetSdkVersion to 25 to test your app with compatibility behaviors disabled. For details on how to set up your app with the API Level 25 SDK, see Set up the Preview.
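
As a rough sketch (version numbers are illustrative, not prescribed by the post), the relevant build.gradle entries would look like:

android {
    compileSdkVersion 25
    buildToolsVersion "25.0.0"

    defaultConfig {
        targetSdkVersion 25   // test with compatibility behaviors disabled
    }
}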

If you're adding app shortcuts or circular launcher icons to your app, you can use Android Studio's built-in Image Asset Studio to quickly help you create icons of different sizes that meet the material design guidelines. You can test your round icons on the Google APIs emulator for API Level 25, which includes support for round icons and the new Google Pixel Launcher.


Android Studio and the Google APIs emulator let you quickly create and test your round icon assets.

If you're adding image keyboard support, you can use the Messenger and Google Keyboard apps included in the preview system images for testing as they include support for this new API.

Scale your tests using Firebase Test Lab for Android

To help scale your testing, make sure to take advantage of Firebase Test Lab for Android and run your tests in the cloud at no charge during the preview period on all virtual devices including the Developer Preview 2 (API 25). You can use the automated crawler (Robo Test) to test your app without having to write any test scripts, or you can upload your own instrumentation (e.g. Espresso) tests. You can upload your tests here.

Publish your apps to alpha, beta or production channels in Google Play

After you've finished final testing, you can publish your updates compiled against, and optionally targeting, API 25 to Google Play. You can publish to your alpha, beta, or even production channels in the Google Play Developer Console, and push your app updates to users whose devices are running Android 7.1, such as Pixel and Android Beta devices.

Get Developer Preview 2 on Your Eligible Device

If you have an eligible device that's already enrolled in the Android Beta Program, the device will get the Developer Preview 2 update over the coming week. No action is needed on your part. If you aren't yet enrolled in the program, the easiest way to get started is to visit android.com/beta and opt in your eligible Android phone or tablet -- you'll soon receive this preview update over-the-air. As always, you can also download and flash this update manually.

As mentioned above, this Developer Preview update is available for Nexus 5X, Nexus 6P, Nexus 9, and Pixel C devices.

We're expecting to launch the final release of Android 7.1.1 in just a few weeks. Starting in December, we'll roll out Android 7.1.1 to the full lineup of supported preview devices, as well as the recently launched Pixel and Pixel XL devices. At that time, we'll also push the sources to AOSP, so our device manufacturer partners can bring this new platform update to consumers on their devices.

Meanwhile, we continue to welcome your feedback in the Developer Preview issue tracker, N Preview Developer community, or Android Beta community as we work towards the final consumer release in December!

Categories: Programming

Post Agile Age: Drivers of the End of the Agile Movement and Method Lemmings

Follow the leader?

I had a conversation with Mauricio Aguiar of ti Metricas earlier this week discussing the cycle of change in software development. In the end, there is only one absolute: the person paying the bill wants value, always more value. The Agile movement is just the current iteration in the search for the tools to deliver more value. The movement marked and driven by the Agile Manifesto has had a great run. Agile as a movement provided a new framework for thinking about how work should or could be approached. However, the movement driven by values and principles has faded, replaced by a focus on frameworks and techniques. This new focus is neither good nor bad, but rather an evolution and a step toward the next big thing. Four major factors contributed to the end of Agile as a movement:

  1. Method Lemmings – Just doing Agile, and therefore often doing Agile inappropriately.
  2. Proscriptive Norms – Defining boundaries around methods that reduce flexibility.
  3. A Brand-Driven Eco-System – Splintering of schools of thought driven by economic competition.
  4. A Lack of Systems Thinking/Management – A resurgence of managing people and steps rather than taking a systems view.

Every major trend in IT has been impacted by these drivers. Interestingly, they have tended to appear in the same order as each new movement has appeared and then evolved.

Method Lemmings:

The idea of method lemmings was introduced to me by Larry Cooper, creator and force behind the Agility Series (interviewed on SPaMCAST 418). Larry used the term ‘method lemmings’ to describe the group of practitioners that have a need to be seen to be doing what the “cool” people are doing. Stated in a little less inflammatory manner, method lemmings are those who, in the early and late majority phases of the classic product adoption life cycle, are doing Agile because everyone else is doing Agile.

[Figure: the product adoption life cycle]

Any product or movement will ride the product adoption life cycle. The slope of the ascent (and probably the descent) reflects the degree to which the product or idea catches the imagination of its target market. The bigger the frenzy, the more people jump on the bandwagon because of the coolness factor. These are the people Larry classified as method lemmings.

In Agile there has been a passionate discussion about the difference between doing Agile and being Agile. Those that are Agile embrace the principles and then fit practices to the work; those that merely do Agile are more apt to apply techniques by rote or inappropriately. For example, this morning a friend who owns a medium-sized consulting firm told me he had approached several firms to handle their standard payroll, and we discussed the bids from two of the firms. Both were equally prominent in the marketplace and had many years in the industry. One firm suggested that since they used Agile they would create a backlog for the conversion but could not commit to a price or date for the conversion. Another that also used Agile stated that a payroll conversion was a common project and that, because they used Agile and Lean techniques, they could quote a price and commit to being ready on a specific date. I would suggest that the latter was being more Agile than the former, although both were probably using similar techniques. The latter firm got the business because the first organization’s approach seemed to put techniques in front of delivering value. In this case, at the very least, just doing Agile techniques without understanding the principles, which are value focused, did not appear to add value for the customer. Just following the pack over the cliff leads to problems and fails to deliver value to customers, which weakens the value of adopting Agile!

Planned essays in the Post Agile Age arc include:

  1. Post Agile Age: The Movement Is Dead
  2. Post Agile Age: Drivers of the End of the Agile Movement and Method Lemmings (Current)
  3. Proscriptive Norms
  4. A Brand-Driven Eco-System
  5. A Lack of Systems Thinking/Management
  6. The Age of Aquarius (Something Better is Beginning)

Categories: Process Management

Using Helm to install Traefik as an Ingress Controller in Kubernetes

Agile Testing - Grig Gheorghiu - Tue, 12/06/2016 - 23:15
That was a mouthful of a title...Hope this post lives up to it :)

First of all, just a bit of theory. If you want to expose your application running on Kubernetes to the outside world, you have several choices.

One choice you have is to expose the pods running your application via a Service of type NodePort or LoadBalancer. If you run your service as a NodePort, Kubernetes will allocate a random high port on every node in the cluster, and it will proxy traffic to that port to your service. Services of type LoadBalancer are only supported if you run your Kubernetes cluster using certain specific cloud providers such as AWS and GCE. In this case, the cloud provider will create a specific load balancer resource, for example an Elastic Load Balancer in AWS, which will then forward traffic to the pods comprising your service. Either way, the load balancing you get by exposing a service is fairly crude, at the TCP layer and using a round-robin algorithm.
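
For concreteness, a minimal Service manifest of type NodePort might look like the following sketch (the app name and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort        # use type: LoadBalancer instead on AWS/GCE
  selector:
    app: my-app         # must match the labels on your pods
  ports:
  - port: 80            # port exposed by the service
    targetPort: 8080    # port your container listens on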

A better choice for exposing your Kubernetes application is to use Ingress resources together with Ingress Controllers. An ingress resource is a fancy name for a set of layer 7 load balancing rules, as you might be familiar with if you use HAProxy or Pound as a software load balancer. An Ingress Controller is a piece of software that actually implements those rules by watching the Kubernetes API for requests to Ingress resources. Here is a fragment from the Ingress Controller documentation on GitHub:

What is an Ingress Controller?

An Ingress Controller is a daemon, deployed as a Kubernetes Pod, that watches the ApiServer's /ingresses endpoint for updates to the Ingress resource. Its job is to satisfy requests for ingress.
Writing an Ingress Controller

Writing an Ingress controller is simple. By way of example, the nginx controller does the following:
  • Poll until apiserver reports a new Ingress
  • Write the nginx config file based on a go text/template
  • Reload nginx
As I mentioned in a previous post, I warmly recommend watching a KubeCon presentation from Gerred Dillon on "Kubernetes Ingress: Your Router, Your Rules" if you want to further delve into the advantages of using Ingress Controllers as opposed to plain Services.
While nginx is the only software currently included in the Kubernetes source code as an Ingress Controller, I wanted to experiment with a full-fledged HTTP reverse proxy such as Traefik. I should add from the beginning that only nginx offers the TLS feature of Ingress resources. Traefik can terminate SSL of course, and I'll show how you can do that, but it is outside of the Ingress resource spec.

I've also been looking at Helm, the Kubernetes package manager, and I noticed that Traefik is one of the 'stable' packages (or Charts as they are called) currently offered by Helm, so I went the Helm route in order to install Traefik. In the following instructions I will assume that you are already running a Kubernetes cluster in AWS and that your local kubectl environment is configured to talk to that cluster.

Install Helm

This is pretty easy. Follow the instructions on GitHub to download or install a binary for your OS.

Initialize Helm

Run helm init in order to install the server component of Helm, called tiller, which will be run as a Kubernetes Deployment in the kube-system namespace of your cluster.

Get the Traefik Helm chart from GitHub

I git cloned the entire kubernetes/charts repo, then copied the traefik directory locally under my own source code repo which contains the rest of the yaml files for my Kubernetes resource manifests.

# git clone https://github.com/kubernetes/charts.git helmcharts
# cp -r helmcharts/stable/traefik traefik-helm-chart
It is instructive to look at the contents of a Helm chart. The main advantage of a chart in my view is the bundling together of all the Kubernetes resources necessary to run a specific set of services. The other advantage is that you can use Go-style templates for the resource manifests, and the variables in those template files can be passed to helm via a values.yaml file or via the command line.
For more details on Helm charts and templates, I recommend this linux.com article.
Create an Ingress resource for your application service
I copied the dashboard-ingress.yaml template file from the Traefik chart and customized it so as to refer to my application's web service, which is running in a Kubernetes namespace called tenant1.

# cd traefik-helm-chart/templates
# cp dashboard-ingress.yaml web-ingress.yaml
# cat web-ingress.yaml
{{- if .Values.tenant1.enabled }}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: {{ .Values.tenant1.namespace }}
  name: {{ template "fullname" . }}-web-ingress
  labels:
    app: {{ template "fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
spec:
  rules:
  - host: {{ .Values.tenant1.domain }}
    http:
      paths:
      - path: /
        backend:
          serviceName: {{ .Values.tenant1.serviceName }}
          servicePort: {{ .Values.tenant1.servicePort }}
{{- end }}
The variables referenced in the template above are defined in the values.yaml file in the Helm chart. I started with the variables in the values.yaml file that came with the Traefik chart and added my own customizations:
# vi traefik-helm-chart/values.yaml
ssl:
  enabled: true
acme:
  enabled: true
  email: admin@mydomain.com
  staging: false
  # Save ACME certs to a persistent volume. WARNING: If you do not do this, you will re-request
  # certs every time a pod (re-)starts and you WILL be rate limited!
  persistence:
    enabled: true
    storageClass: kubernetes.io/aws-ebs
    accessMode: ReadWriteOnce
    size: 1Gi
dashboard:
  enabled: true
  domain: tenant1-lb.dev.mydomain.com
gzip:
  enabled: false
tenant1:
  enabled: true
  namespace: tenant1
  domain: tenant1.dev.mydomain.com
  serviceName: web
  servicePort: http
Note that I added a section called tenant1, where I defined the variables referenced in the web-ingress.yaml template above. I also enabled the ssl and acme sections, so that Traefik can automatically install SSL certificates from Let's Encrypt via the ACME protocol.
Install your customized Helm chart for Traefik
With these modifications done, I ran 'helm install' to actually deploy the various Kubernetes resources included in the Traefik chart, specifying the directory containing my chart files (traefik-helm-chart) as the last argument:
# helm install --name tenant1-lb --namespace tenant1 traefik-helm-chart/
NAME: tenant1-lb
LAST DEPLOYED: Tue Nov 29 09:51:12 2016
NAMESPACE: tenant1
STATUS: DEPLOYED

RESOURCES:
==> extensions/Ingress
NAME                             HOSTS                         ADDRESS   PORTS   AGE
tenant1-lb-traefik-web-ingress   tenant1.dev.mydomain.com                80      1s
tenant1-lb-traefik-dashboard     tenant1-lb.dev.mydomain.com             80      0s

==> v1/PersistentVolumeClaim
NAME                      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
tenant1-lb-traefik-acme   Pending                                      0s

==> v1/Secret
NAME                              TYPE      DATA      AGE
tenant1-lb-traefik-default-cert   Opaque    2         1s

==> v1/ConfigMap
NAME                 DATA      AGE
tenant1-lb-traefik   1         1s

==> v1/Service
NAME                           CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
tenant1-lb-traefik-dashboard   10.3.0.15    <none>        80/TCP           1s
tenant1-lb-traefik             10.3.0.215   <pending>     80/TCP,443/TCP   1s

==> extensions/Deployment
NAME                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
tenant1-lb-traefik   1         1         1            0           1s

NOTES:
1. Get Traefik's load balancer IP/hostname:
    NOTE: It may take a few minutes for this to become available.
    You can watch the status by running:
        $ kubectl get svc tenant1-lb-traefik --namespace tenant1 -w
    Once 'EXTERNAL-IP' is no longer '<pending>':
        $ kubectl describe svc tenant1-lb-traefik --namespace tenant1 | grep Ingress | awk '{print $3}'
2. Configure DNS records corresponding to Kubernetes ingress resources to point to the load balancer IP/hostname found in step 1
At this point you should see two Ingress resources, one for the Traefik dashboard and one for the custom web ingress resource:
# kubectl --namespace tenant1 get ingress
NAME                             HOSTS                         ADDRESS   PORTS   AGE
tenant1-lb-traefik-dashboard     tenant1-lb.dev.mydomain.com             80      50s
tenant1-lb-traefik-web-ingress   tenant1.dev.mydomain.com                80      51s
As per the Helm notes above (shown as part of the output of helm install), run this command to figure out the CNAME of the AWS ELB created by Kubernetes during the creation of the tenant1-lb-traefik service of type LoadBalancer:
# kubectl describe svc tenant1-lb-traefik --namespace tenant1 | grep Ingress | awk '{print $3}'
a5be275d8b65c11e685a402e9ec69178-91587212.us-west-2.elb.amazonaws.com
Create tenant1.dev.mydomain.com and tenant1-lb.dev.mydomain.com as DNS CNAME records pointing to a5be275d8b65c11e685a402e9ec69178-91587212.us-west-2.elb.amazonaws.com.

Now, if you hit http://tenant1-lb.dev.mydomain.com you should see the Traefik dashboard showing the frontends on the left and the backends on the right:
[Screenshot: Traefik dashboard]
If you hit http://tenant1.dev.mydomain.com you should see your web service in action.
You can also inspect the logs of the tenant1-lb-traefik pod to see what's going on under the covers when Traefik is launched and to verify that the Let's Encrypt SSL certificates were properly downloaded via ACME:
# kubectl --namespace tenant1 logs tenant1-lb-traefik-3710322105-o2887
time="2016-11-29T00:03:51Z" level=info msg="Traefik version v1.1.0 built on 2016-11-18_09:20:46AM"
time="2016-11-29T00:03:51Z" level=info msg="Using TOML configuration file /config/traefik.toml"
time="2016-11-29T00:03:51Z" level=info msg="Preparing server http &{Network: Address::80 TLS:<nil> Redirect:<nil> Auth:<nil> Compress:false}"
time="2016-11-29T00:03:51Z" level=info msg="Preparing server https &{Network: Address::443 TLS:0xc4201b1800 Redirect:<nil> Auth:<nil> Compress:false}"
time="2016-11-29T00:03:51Z" level=info msg="Starting server on :80"
time="2016-11-29T00:03:58Z" level=info msg="Loading ACME Account..."
time="2016-11-29T00:03:59Z" level=info msg="Loaded ACME config from store /acme/acme.json"
time="2016-11-29T00:04:01Z" level=info msg="Starting provider *main.WebProvider {\"Address\":\":8080\",\"CertFile\":\"\",\"KeyFile\":\"\",\"ReadOnly\":false,\"Auth\":null}"
time="2016-11-29T00:04:01Z" level=info msg="Starting provider *provider.Kubernetes {\"Watch\":true,\"Filename\":\"\",\"Constraints\":[],\"Endpoint\":\"\",\"DisablePassHostHeaders\":false,\"Namespaces\":null,\"LabelSelector\":\"\"}"
time="2016-11-29T00:04:01Z" level=info msg="Retrieving ACME certificates..."
time="2016-11-29T00:04:01Z" level=info msg="Retrieved ACME certificates"
time="2016-11-29T00:04:01Z" level=info msg="Starting server on :443"
time="2016-11-29T00:04:01Z" level=info msg="Server configuration reloaded on :80"
time="2016-11-29T00:04:01Z" level=info msg="Server configuration reloaded on :443"
To get an even better warm and fuzzy feeling about the SSL certificates installed via ACME, you can run this command against the live endpoint tenant1.dev.mydomain.com:
# echo | openssl s_client -showcerts -servername tenant1.dev.mydomain.com -connect tenant1.dev.mydomain.com:443 2>/dev/null
CONNECTED(00000003)
---
Certificate chain
 0 s:/CN=tenant1.dev.mydomain.com
   i:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
-----BEGIN CERTIFICATE-----
MIIGEDCCBPigAwIBAgISAwNwBNVU7ZHlRtPxBBOPPVXkMA0GCSqGSIb3DQEBCwUA
-----END CERTIFICATE-----
 1 s:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
   i:/O=Digital Signature Trust Co./CN=DST Root CA X3
-----BEGIN CERTIFICATE-----
uM2VcGfl96S8TihRzZvoroed6ti6WqEBmtzw3Wodatg+VyOeph4EYpr/1wXKtx8/KOqkqm57TH2H3eDJAkSnh6/DNFu0Qg==
-----END CERTIFICATE-----
---
Server certificate
subject=/CN=tenant1.dev.mydomain.com
issuer=/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
---
No client certificate CA names sent
---
SSL handshake has read 3009 bytes and written 713 bytes
---
New, TLSv1/SSLv3, Cipher is AES128-SHA
Server public key is 4096 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : TLSv1
    Cipher    : AES128-SHA
    Start Time: 1480456552
    Timeout   : 300 (sec)
    Verify return code: 0 (ok)
etc.
Other helm commands
You can list the Helm releases that are currently running (a Helm release is a particular versioned instance of a Helm chart) with helm list:
# helm list
NAME         REVISION   UPDATED                    STATUS     CHART
tenant1-lb   1          Tue Nov 29 10:13:47 2016   DEPLOYED   traefik-1.1.0-a

If you change any files or values in a Helm chart, you can apply the changes by means of the 'helm upgrade' command:

# helm upgrade tenant1-lb traefik-helm-chart
You can see the status of a release with helm status:
# helm status tenant1-lb
LAST DEPLOYED: Tue Nov 29 10:13:47 2016
NAMESPACE: tenant1
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME                           CLUSTER-IP   EXTERNAL-IP        PORT(S)          AGE
tenant1-lb-traefik             10.3.0.76    a92601b47b65f...   80/TCP,443/TCP   35m
tenant1-lb-traefik-dashboard   10.3.0.36    <none>             80/TCP           35m

==> extensions/Deployment
NAME                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
tenant1-lb-traefik   1         1         1            1           35m

==> extensions/Ingress
NAME                             HOSTS                         ADDRESS   PORTS   AGE
tenant1-lb-traefik-web-ingress   tenant1.dev.mydomain.com                80      35m
tenant1-lb-traefik-dashboard     tenant1-lb.dev.mydomain.com             80      35m

==> v1/PersistentVolumeClaim
NAME                      STATUS   VOLUME                                     CAPACITY   ACCESSMODES   AGE
tenant1-lb-traefik-acme   Bound    pvc-927df794-b65f-11e6-85a4-02e9ec69178b   1Gi        RWO           35m

==> v1/Secret
NAME                              TYPE     DATA   AGE
tenant1-lb-traefik-default-cert   Opaque   2      35m

==> v1/ConfigMap
NAME                 DATA   AGE
tenant1-lb-traefik   1      35m




Saving Data: Reducing the size of App Updates by 65%

Android Developers Blog - Tue, 12/06/2016 - 22:46

Posted by Andrew Hayden, Software Engineer on Google Play

Android users are downloading tens of billions of apps and games on Google Play. We're also seeing developers update their apps frequently in order to provide users with great content, improve security, and enhance the overall user experience. It takes a lot of data to download these updates and we know users care about how much data their devices are using. Earlier this year, we announced that we started using the bsdiff algorithm (by Colin Percival). Using bsdiff, we were able to reduce the size of app updates on average by 47% compared to the full APK size.

Today, we're excited to share a new approach that goes further — File-by-File patching. App Updates using File-by-File patching are, on average, 65% smaller than the full app, and in some cases more than 90% smaller.

The savings, compared to our previous approach, add up to 6 petabytes of user data saved per day!

In order to get the new version of the app, Google Play sends your device a patch that describes the differences between the old and new versions of the app.

Imagine you are an author of a book about to be published, and wish to change a single sentence - it's much easier to tell the editor which sentence to change and what to change, rather than send an entirely new book. In the same way, patches are much smaller and much faster to download than the entire APK.

Techniques used in File-by-File patching

Android apps are packaged as APKs, which are ZIP files with special conventions. Most of the content within the ZIP files (and APKs) is compressed using a technology called Deflate. Deflate is really good at compressing data but it has a drawback: it makes identifying changes in the original (uncompressed) content really hard. Even a tiny change to the original content (like changing one word in a book) can make the compressed output of deflate look completely different. Describing the differences between the original content is easy, but describing the differences between the compressed content is so hard that it leads to inefficient patches.

Watch how much the compressed text on the right side changes from a one-letter change in the uncompressed text on the left:
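
You can reproduce the effect with a few lines of Python; this is a minimal sketch using the standard zlib module (which implements deflate), not anything from the Play Store pipeline:

import zlib

old = b"It was the best of times, it was the worst of times. " * 40
new = old.replace(b"worst", b"wurst", 1)  # change a single letter, once

old_z = zlib.compress(old, 6)
new_z = zlib.compress(new, 6)
# find the first byte at which the two compressed streams diverge
diverge = next((i for i, (a, b) in enumerate(zip(old_z, new_z)) if a != b),
               min(len(old_z), len(new_z)))
print(len(old_z), len(new_z), "diverge at byte", diverge)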

File-by-File therefore is based on detecting changes in the uncompressed data. To generate a patch, we first decompress both old and new files before computing the delta (we still use bsdiff here). Then to apply the patch, we decompress the old file, apply the delta to the uncompressed content and then recompress the new file. In doing so, we need to make sure that the APK on your device is a perfect match, byte for byte, to the one on the Play Store (see APK Signature Scheme v2 for why).

When recompressing the new file, we hit two complications. First, Deflate has a number of settings that affect output, and we don't know which settings were used in the first place. Second, many versions of deflate exist, and we need to know whether the version on your device is suitable.

Fortunately, after analysis of the apps on the Play Store, we've discovered that recent and compatible versions of deflate based on zlib (the most popular deflate library) account for almost all deflated content in the Play Store. In addition, the default settings (level=6) and maximum compression settings (level=9) are the only settings we encountered in practice.

Knowing this, we can detect and reproduce the original deflate settings. This makes it possible to uncompress the data, apply a patch, and then recompress the data back to exactly the same bytes as originally uploaded.
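
In zlib terms, the round trip relies on deflate being deterministic for a given library version and level, as in this simplified sketch (zlib standing in for the device's deflate implementation, with the bsdiff step omitted):

import zlib

original = zlib.compress(b"some APK entry content", 9)  # as uploaded, level=9
raw = zlib.decompress(original)       # the patch is applied to this raw content
# ...apply the bsdiff delta to raw here (omitted)...
recompressed = zlib.compress(raw, 9)  # same level, same library
assert recompressed == original       # byte-for-byte identical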

However, there is one trade off; extra processing power is needed on the device. On modern devices (e.g. from 2015), recompression can take a little over a second per megabyte and on older or less powerful devices it can be longer. Analysis so far shows that, on average, if the patch size is halved then the time spent applying the patch (which for File-by-File includes recompression) is doubled.

For now, we are limiting the use of this new patching technology to auto-updates only, i.e. the updates that take place in the background, usually at night when your phone is plugged into power and you're not likely to be using it. This ensures that users won't have to wait any longer than usual for an update to finish when manually updating an app.

How effective is File-by-File Patching?

Here are examples of app updates already using File-by-File Patching:


Application              Original Size   BSDiff Patch Size   File-by-File Patch Size
                                         (% vs original)     (% vs original)
Farm Heroes Super Saga   71.1 MB         13.4 MB (-81%)      8.0 MB (-89%)
Google Maps              32.7 MB         17.5 MB (-46%)      9.6 MB (-71%)
Gmail                    17.8 MB         7.6 MB (-57%)       7.3 MB (-59%)
Google TTS               18.9 MB         17.2 MB (-9%)       13.1 MB (-31%)
Kindle                   52.4 MB         19.1 MB (-64%)      8.4 MB (-84%)
Netflix                  16.2 MB         7.7 MB (-52%)       1.2 MB (-92%)

Disclaimer: if you see different patch sizes when you press "update" manually, that is because we are not currently using File-by-file for interactive updates, only those done in the background.

Saving data and making our users (& developers!) happy

These changes are designed to ensure our community of over a billion Android users use as little data as possible for regular app updates. The best thing is that as a developer you don't need to do anything. You get these reductions to your update size for free!

If you'd like to know more about File-by-File patching, including the technical details, head over to the Archive Patcher GitHub project where you can find information, including the source code. Yes, File-by-File patching is completely open-source!

As a developer if you're interested in reducing your APK size still further, here are some general tips on reducing APK size.

Categories: Programming

SE-Radio Episode 276: Björn Rabenstein on Site Reliability Engineering

Björn Rabenstein discusses the field of Site Reliability Engineering (SRE) with host Robert Blumen. The term SRE has recently emerged to mean Google’s approach to DevOps. The publication of Google’s book on SRE has brought many of their practices into more public discussion. The interview covers: what is distinct about SRE versus devops; the SRE […]
Categories: Programming


Sponsored Post: Loupe, New York Times, ScaleArc, Aerospike, Scalyr, Gusto, VividCortex, MemSQL, InMemory.Net, Zohocorp

Who's Hiring?
  • The New York Times is looking for a Software Engineer for its Delivery/Site Reliability Engineering team. You will also be a part of a team responsible for building the tools that ensure that the various systems at The New York Times continue to operate in a reliable and efficient manner. Some of the tech we use: Go, Ruby, Bash, AWS, GCP, Terraform, Packer, Docker, Kubernetes, Vault, Consul, Jenkins, Drone. Please send resumes to: technicaljobs@nytimes.com

  • IT Security Engineering. At Gusto we are on a mission to create a world where work empowers a better life. As Gusto's IT Security Engineer you'll shape the future of IT security and compliance. We're looking for a strong IT technical lead to manage security audits and write and implement controls. You'll also focus on our employee, network, and endpoint posture. As Gusto's first IT Security Engineer, you will be able to build the security organization with direct impact to protecting PII and ePHI. Read more and apply here.
Fun and Informative Events
  • Your event here!
Cool Products and Services
  • A note for .NET developers: You know the pain of troubleshooting errors with limited time, limited information, and limited tools. Log management, exception tracking, and monitoring solutions can help, but many of them treat the .NET platform as an afterthought. You should learn about Loupe...Loupe is a .NET logging and monitoring solution made for the .NET platform from day one. It helps you find and fix problems fast by tracking performance metrics, capturing errors in your .NET software, identifying which errors are causing the greatest impact, and pinpointing root causes. Learn more and try it free today.

  • ScaleArc's database load balancing software empowers you to “upgrade your apps” to consumer grade – the never down, always fast experience you get on Google or Amazon. Plus you need the ability to scale easily and anywhere. Find out how ScaleArc has helped companies like yours save thousands, even millions of dollars and valuable resources by eliminating downtime and avoiding app changes to scale. 

  • Scalyr is a lightning-fast log management and operational data platform. It's a tool (actually, multiple tools) that your entire team will love. Get visibility into your production issues without juggling multiple tabs and different services -- all of your logs, server metrics and alerts are in your browser and at your fingertips. Loved and used by teams at Codecademy, ReturnPath, Grab, and InsideSales. Learn more today or see why Scalyr is a great alternative to Splunk.

  • InMemory.Net provides a Dot Net native in-memory database for analysing large amounts of data. It runs natively on .Net, and provides native .Net, COM & ODBC APIs for integration. It also has an easy to use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • VividCortex measures your database servers’ work (queries), not just global counters. If you’re not monitoring query performance at a deep level, you’re missing opportunities to boost availability, turbocharge performance, ship better code faster, and ultimately delight more customers. VividCortex is a next-generation SaaS platform that helps you find and eliminate database performance problems at scale.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • ManageEngine Applications Manager: Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com: Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below...

Categories: Architecture

AMP Cache Updates

Google Code Blog - Tue, 12/06/2016 - 00:39

Posted by John Coiner, Software Engineer

Today we are announcing a change to the domain scheme of the Google AMP Cache. Beginning soon, the Google AMP Cache will serve each site from its own subdomain of https://cdn.ampproject.org. This change will allow content served from the Google AMP Cache to be protected by the fundamental security model of the web: the HTML5 origin.

No immediate changes are required for most publishers of AMP documents. However, to benefit from the additional security, it is recommended that all AMP publishers update their CORS implementation in preparation for the new Google AMP Cache URL scheme. The Google AMP Cache will continue to support existing URLs, but those URLs will eventually redirect to the new URL scheme.

How subdomain names will be created on the Google AMP Cache

The subdomains created by the Google AMP Cache will be human-readable when character limits and technical specs allow, and will closely resemble the publisher's own domain.

When possible, the Google AMP Cache will create each subdomain by first converting the AMP document domain from IDN (punycode) to UTF-8. Every "-" (dash) will be replaced with "--" (two dashes) and every "." (dot) will be replaced with a "-" (dash). For example, pub.com will map to pub-com.cdn.ampproject.org. Where technical limitations prevent a human-readable subdomain, a one-way hash will be used instead.
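
The readable case is mechanical enough to sketch in a few lines of Python (the hash fallback is omitted, and the helper name is ours, not Google's):

def amp_cache_subdomain(domain):
    # punycode -> Unicode first; then '-' -> '--' before '.' -> '-' so the
    # two substitutions cannot collide
    label = domain.encode('ascii').decode('idna')
    label = label.replace('-', '--').replace('.', '-')
    return label + '.cdn.ampproject.org'

print(amp_cache_subdomain('pub.com'))      # pub-com.cdn.ampproject.org
print(amp_cache_subdomain('my-site.org'))  # my--site-org.cdn.ampproject.org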

Updates needed for hosts and service providers with remote endpoints

Due to the changes described above, CORS endpoints will begin seeing requests with new origins. The following updates will be required:

  • Expand request acceptance to the new subdomain: Sites that currently only accept CORS requests from https://cdn.ampproject.org and the publisher's own origins must update their systems to accept requests from https://[pub-com].cdn.ampproject.org, https://cdn.ampproject.org, and the AMP publisher's own origins.
  • Tighten request acceptance for security: Sites that currently accept CORS requests from https://*.ampproject.org as described in the AMP spec can improve security by restricting acceptance to requests from https://[pub-com].cdn.ampproject.org, https://cdn.ampproject.org, and the AMP publisher's own origins. Support for https://*.ampproject.org is no longer necessary.
  • Support for the new subdomain pattern by ads, analytics, and other technology providers: Service providers such as analytics and ads vendors that have a CORS endpoint will also need to ensure that their systems accept requests from the Google AMP Cache's subdomains (e.g. https://ampbyexample-com.cdn.ampproject.org), in addition to their own hosts.
Retrieving the Google AMP Cache URL

For platforms that display AMP documents and serve from the Google AMP Cache, the best way to retrieve Google AMP Cache URLs is to continue using the Google AMP Cache URL API. The Google AMP Cache URL API will be updated in Q1 2017 to return the new cache URL scheme that includes the subdomain.

You can use an interactive tool to find the Google AMP Cache subdomain generated for each site over at ampbyexample.com.

src="https://amp-by-example-api.appspot.com/iframe/amp-url-converter.html?url=https://ampproject.com">
Timing

Google Search is planning to begin using the new URL scheme as soon as possible and is monitoring sites' compatibility. In addition, we will be reaching out to impacted parties, and we will make available a developer testing sandbox prior to launching to ensure a smooth transition.

Categories: Programming

Welcoming Android 7.1.1 Nougat

Android Developers Blog - Mon, 12/05/2016 - 21:06

Posted by Dave Burke, VP of Engineering

Android 7.1.1 Nougat!

Today we're rolling out an update to Nougat -- Android 7.1.1 for Pixel and Pixel XL devices and the full lineup of supported Nexus devices. We're also pushing the Android 7.1.1 source code to the Android Open Source Project (AOSP) so that device makers can get their hands on the latest version of Android.

With Android 7.1.1 officially on its way to users, it's a good time to make sure your apps are ready.

What's in Android 7.1.1?

Android 7.1.1 is an incremental release that builds on the features already available on Pixel and Pixel XL devices, adding a handful of new features for consumers as well as optimizations and bug fixes on top of the base Android 7.1 platform (API level 25).

If you haven't explored the developer features, you'll want to take a look at app shortcuts, round icon resources, and image keyboard support, among others -- you can see the full list of developer features here. For details on API Level 25, check out the API diffs and the API reference.

You can find an overview of all of the Android Nougat developer resources here, including details on the core Android 7.0 Nougat behavior changes and developer features.

Coming to consumer devices soon

We're starting the Android 7.1.1 rollout today, and we expect it to reach all eligible devices over the next several weeks. Pixel and Pixel XL devices will get the over-the-air (OTA) update, as will Nexus 5X, Nexus 6P, Nexus 6, Nexus 9, Nexus Player, Pixel C, and General Mobile 4G (Android One) devices. Devices enrolled in the Android Beta Program will receive the final version as well. As always, you can also download and flash this update manually.

We've also been working with our device manufacturer partners to bring Android 7.1.1 to their devices in the months ahead.

Make sure your apps are ready

Take this opportunity to test your apps for compatibility and optimize them to look their best on Android 7.1.1, such as by providing round icons and adding app shortcuts. We recommend compiling your app with, and ideally targeting, API 25. See our recent post for details.

With the final platform we’re updating the platform and build tools in Android Studio, as well as the API Level 25 emulator system images. The latest version of the support library (25.0.1) is also available for you to add image keyboard support, bottom navigation, and other features for devices running API Level 25 or earlier.

We're also providing downloadable factory and OTA images on the Nexus Images page to help you do final testing on your Pixel and Nexus devices. To help scale your testing, make sure to take advantage of Firebase Test Lab for Android and run your tests in the cloud at no charge through the end of December.

After your final testing, publish your apps to your alpha, beta, or production channels in the Google Play Developer Console.

What's next?

We'll soon be closing open bugs logged against Developer Preview builds, but please keep the feedback coming! If you still see an issue that you filed in the preview tracker, just file a new issue against Android 7.1 in the AOSP issue tracker. You can also continue to give us feedback or ask questions in the developer community.

As mentioned back in August, we've moved Android Nougat into a regular maintenance cycle and we've already started work on refinements and bug fixes for the next incremental update. If you have an eligible device that's currently enrolled in the Android Beta Program, your device will automatically receive preview updates of upcoming Android Nougat releases as soon as they are available. If you don't want to receive those updates, just visit the Beta site and unenroll the device.

Thanks for being part of the developer preview. Let us know how this year's preview met your needs by taking a short survey. Your feedback helps to shape our future releases.

Categories: Programming

The Tech that Turns Each of Us Into a Walled Garden



How we treat each other is based on empathy. Empathy is based on shared experience. What happens when we have nothing in common?

Systems are now being constructed so we’ll never see certain kinds of information. Each of us lives in our own algorithmically created Skinner Box/silo/walled garden, fed only the information AIs think will be simultaneously most rewarding to us and to their creators (Facebook, Google, etc.).

We are always being manipulated, granted, but how we are being manipulated has taken a sharp, technology-driven turn, and we should be aware of it. This is different. Scary different. And the technology behind it all is absolutely fascinating.

Divided We Are Exploitable
Categories: Architecture

SPaMCAST 420 – John Hunter, Building Organizational Capability


http://www.spamcast.net

Listen Now
Subscribe on iTunes
Check out the podcast on Google Play Music

The Software Process and Measurement Cast 420 features our interview with John Hunter. John is a SPaMCAST alumnus; he first appeared on SPaMCAST 226 to talk about why management matters. In this podcast John returns to discuss building capability in the organization and understanding the impact of variation. We also talked about Deming and why people tack the word improvement onto almost anything!

John’s Bio

John Hunter has served as an information technology program manager for the Office of the Secretary of Defense Quality Management Office, the White House Military Office, and the American Society for Engineering Education.

In 2013, he published his first book – Management Matters: Building Enterprise Capability.

John created and operates one of the first, and still one of the most popular, management resources on the internet.  He continues to aid managers in their efforts to improve their organizations with an emphasis on software development and leveraging the internet.  His blog is widely recognized as a valuable resource for leaders and managers with a focus on improving the practice of management in organizations.

Re-Read Saturday News

In this week’s re-read of The Five Dysfunctions of a Team  by Patrick Lencioni (Jossey-Bass, Copyright 2002, 33rd printing), we tackle the sections titled Accountability, Individual Contributor, and The Talk.  We are getting close to the end of the novel portion of the book but over the next few weeks, we have a number of ideas to extract from the book before we review the model.

(Remember to buy a copy and read along.)  We are well over halfway through this book and I am considering re-reading Carol Dweck’s Mindset next.  What are your thoughts?

Takeaways from this week include:

  • Team members hold other team members accountable.        
  • Be aware of how you affect the people around you or suffer the consequences!
  • Try to step back and reduce the stress when confronted by tough negotiations.

Visit the Software Process and Measurement Cast blog to participate in this and previous re-reads.

Next SPaMCAST

The Software Process and Measurement Cast 421 will feature our essay on vanity metrics.  Vanity metrics make people feel good, but are less useful for making decisions about the business.  The essay discusses how to recognize vanity metrics and the risks of falling prey to their allure.

We will also have columns from Steve Tendon with another chapter in his Tame The Flow: Hyper-Productive Knowledge-Work Performance, The TameFlow Approach and Its Application to Scrum and Kanban, published by J Ross (buy a copy here). Finally, Gene Hughson will anchor the cast with an entry from his Form Follows Function Blog.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.


Categories: Process Management

SPaMCAST 420 - John Hunter, Building Organizational Capability

Software Process and Measurement Cast - Sun, 12/04/2016 - 23:00

The Software Process and Measurement Cast 420 features our interview with John Hunter.  John is a SPaMCAST alumni; John first appeared on SPaMCAST 226 to talk about why management matters.  In this podcast John returns to discuss building capability in the organization and  understanding the impact of  variation.  We also talked Deming and why people tack the word improvement on almost anything!   

John’s Bio

John Hunter has served as an information technology program manager for the Office of Secretary of Defense Quality Management Office, the White House Military Office and the American Society for Engineering Education.

In 2013, he published his first book - Management Matters: Building Enterprise Capability.

John created and operates one of the first, and still one of the most popular, management resources on the internet.  He continues to aid managers in their efforts to improve their organizations with an emphasis on software development and leveraging the internet.  His blog is widely recognized as a valuable resource for leaders and managers with a focus on improving the practice of management in organizations.

Re-Read Saturday News

In this week’s re-read of The Five Dysfunctions of a Team by Patrick Lencioni (Jossey-Bass, Copyright 2002, 33rd printing), we tackle the sections titled Accountability, Individual Contributor, and The Talk.  We are getting close to the end of the novel portion of the book, but over the next few weeks we have a number of ideas to extract from the book before we review the model.

(Remember to buy a copy and read along.)  We are well over halfway through this book and I am considering re-reading Carol Dweck’s Mindset next.  What are your thoughts?

Takeaways from this week include:

  • Team members hold other team members accountable.        
  • Be aware of how you affect the people around you or suffer the consequences!
  • Try to step back and reduce the stress when confronted by tough negotiations.

Visit the Software Process and Measurement Cast blog to participate in this and previous re-reads.

Next SPaMCAST

The Software Process and Measurement Cast 421 will feature our essay on vanity metrics.  Vanity metrics make people feel good, but are less useful for making decisions about the business.  The essay discusses how to recognize vanity metrics and the risks of falling prey to their allure.

We will also have a column from Steve Tendon with another chapter of his book Tame The Flow: Hyper-Productive Knowledge-Work Performance, The TameFlow Approach and Its Application to Scrum and Kanban, published by J Ross (buy a copy here).  Finally, Gene Hughson will anchor the cast with an entry from his Form Follows Function blog.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.

Categories: Process Management

Kubernetes: Simulating a network partition

Mark Needham - Sun, 12/04/2016 - 13:37

A couple of weeks ago I wrote a post explaining how to create a Neo4j causal cluster using Kubernetes, and I wanted to work out how to simulate a network partition that would put the leader on the minority side and force an election.

We’ve done this with our internal tooling on AWS using the iptables command, but unfortunately that isn’t available in my container, which only has the utilities provided by BusyBox.

Luckily one of these is the route command, which allows us to achieve the same thing.

To recap, I have 3 Neo4j pods up and running:

$ kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
neo4j-0   1/1       Running   0          6h
neo4j-1   1/1       Running   0          6h
neo4j-2   1/1       Running   0          6h

And we can check that the route command is available:

$ kubectl exec neo4j-0 -- ls -alh /sbin/route 
lrwxrwxrwx    1 root     root          12 Oct 18 18:58 /sbin/route -> /bin/busybox

Let’s have a look at what role each server is currently playing:

$ kubectl exec neo4j-0 -- bin/cypher-shell "CALL dbms.cluster.role()"
role
"FOLLOWER"
 
Bye!
$ kubectl exec neo4j-1 -- bin/cypher-shell "CALL dbms.cluster.role()"
role
"FOLLOWER"
 
Bye!
$ kubectl exec neo4j-2 -- bin/cypher-shell "CALL dbms.cluster.role()"
role
"LEADER"
 
Bye!

Slight aside: I’m able to call cypher-shell without a username and password because I’ve disabled authorisation by putting the following in conf/neo4j.conf:

dbms.security.auth_enabled=false

Back to the network partitioning… We need to partition neo4j-2 away from the other two servers, which we can do by running the following commands:

$ kubectl exec neo4j-2 -- route add -host neo4j-0.neo4j.default.svc.cluster.local reject && \
  kubectl exec neo4j-2 -- route add -host neo4j-1.neo4j.default.svc.cluster.local reject && \
  kubectl exec neo4j-0 -- route add -host neo4j-2.neo4j.default.svc.cluster.local reject && \
  kubectl exec neo4j-1 -- route add -host neo4j-2.neo4j.default.svc.cluster.local reject

If we look at the logs of neo4j-2 we can see that it’s stepped down after being disconnected from the other two servers:

$ kubectl exec neo4j-2 -- cat logs/debug.log
...
2016-12-04 11:30:10.186+0000 INFO  [o.n.c.c.c.RaftMachine] Moving to FOLLOWER state after not receiving heartbeat responses in this election timeout period. Heartbeats received: []
...

Who’s taken over as leader?

$ kubectl exec neo4j-0 -- bin/cypher-shell "CALL dbms.cluster.role()"
role
"LEADER"
 
Bye!
$ kubectl exec neo4j-1 -- bin/cypher-shell "CALL dbms.cluster.role()"
role
"FOLLOWER"
 
Bye!
$ kubectl exec neo4j-2 -- bin/cypher-shell "CALL dbms.cluster.role()"
role
"FOLLOWER"
 
Bye!

Looks like neo4j-0! Let’s put some data into the database:

$ kubectl exec neo4j-0 -- bin/cypher-shell "CREATE (:Person {name: 'Mark'})"
Added 1 nodes, Set 1 properties, Added 1 labels
 
Bye!

Let’s check if that node made it to the other two servers. We’d expect it to be on neo4j-1 but not on neo4j-2:

$ kubectl exec neo4j-1 -- bin/cypher-shell "MATCH (p:Person) RETURN p"
p
(:Person {name: "Mark"})
 
Bye!
$ kubectl exec neo4j-2 -- bin/cypher-shell "MATCH (p:Person) RETURN p"
 
 
Bye!

On neo4j-2 we’ll repeatedly see these types of entries in the log as its election timeout triggers but fails to get any responses to the vote requests it sends out:

$ kubectl exec neo4j-2 -- cat logs/debug.log
...
2016-12-04 11:32:56.735+0000 INFO  [o.n.c.c.c.RaftMachine] Election timeout triggered
2016-12-04 11:32:56.736+0000 INFO  [o.n.c.c.c.RaftMachine] Election started with vote request: Vote.Request from MemberId{ca9b954c} {term=11521, candidate=MemberId{ca9b954c}, lastAppended=68, lastLogTerm=11467} and members: [MemberId{484178c4}, MemberId{0acdb8dd}, MemberId{ca9b954c}]
...

We can see those vote requests by looking at raft-messages.log, which can be enabled by setting the following property in conf/neo4j.conf:

causal_clustering.raft_messages_log_enable=true

$ kubectl exec neo4j-2 -- cat logs/raft-messages.log
...
11:33:42.101 -->MemberId{484178c4}: Request: Vote.Request from MemberId{ca9b954c} {term=11537, candidate=MemberId{ca9b954c}, lastAppended=68, lastLogTerm=11467}
11:33:42.102 -->MemberId{0acdb8dd}: Request: Vote.Request from MemberId{ca9b954c} {term=11537, candidate=MemberId{ca9b954c}, lastAppended=68, lastLogTerm=11467}
 
11:33:45.432 -->MemberId{484178c4}: Request: Vote.Request from MemberId{ca9b954c} {term=11538, candidate=MemberId{ca9b954c}, lastAppended=68, lastLogTerm=11467}
11:33:45.433 -->MemberId{0acdb8dd}: Request: Vote.Request from MemberId{ca9b954c} {term=11538, candidate=MemberId{ca9b954c}, lastAppended=68, lastLogTerm=11467}
 
11:33:48.362 -->MemberId{484178c4}: Request: Vote.Request from MemberId{ca9b954c} {term=11539, candidate=MemberId{ca9b954c}, lastAppended=68, lastLogTerm=11467}
11:33:48.362 -->MemberId{0acdb8dd}: Request: Vote.Request from MemberId{ca9b954c} {term=11539, candidate=MemberId{ca9b954c}, lastAppended=68, lastLogTerm=11467}
...

To ‘heal’ the network partition we just need to remove the routes we added earlier:

$ kubectl exec neo4j-2 -- route delete neo4j-0.neo4j.default.svc.cluster.local reject && \
  kubectl exec neo4j-2 -- route delete neo4j-1.neo4j.default.svc.cluster.local reject && \
  kubectl exec neo4j-0 -- route delete neo4j-2.neo4j.default.svc.cluster.local reject && \
  kubectl exec neo4j-1 -- route delete neo4j-2.neo4j.default.svc.cluster.local reject

Now let’s check that neo4j-2 has the node that we created earlier:

$ kubectl exec neo4j-2 -- bin/cypher-shell "MATCH (p:Person) RETURN p"
p
(:Person {name: "Mark"})
 
Bye!

That’s all for now!

Categories: Programming

X-Platform development with Xamarin.Forms & F#

Phil Trelford's Array - Sun, 12/04/2016 - 12:23

This post is part of the F# Advent Calendar in English 2016 series organized by Sergey Tihon.

Last month on November 16 – 17, IDTechEx held their largest annual conference and tradeshow on emerging technologies in California with over 200 speakers and close to 4,000 attendees.

[Photo: the IDTechEx exhibition floor]

The show covers a wide variety of topics including electric vehicles, energy harvesting, energy storage, IoT, graphene, printed electronics, robotics, sensors, AR, VR and wearables. Coming from a software background, I couldn’t help but marvel at the size of the event and the level of genuine innovation and entrepreneurship on show. If you’re based in Europe I’d highly recommend attending the Berlin event in May 2017.

IDTechEx wanted an app where users can easily browse information, including research, journals and upcoming webinars, when they’re out and about, even in a WiFi dead spot. The app needed to be released before the show in California so that, at a minimum, users could download and browse the conference agenda and list of exhibitors, and easily find the venue on a map. Xamarin.Forms and F# were successfully used to build the app, which was published in the iTunes and Google Play stores a week before the conference and well received by attendees.

This post walks through some of the development process and the design decisions for this app.

Xamarin.Forms

IDTechEx wanted a native business app that targeted the vast majority of mobile users, which concretely meant iOS and Android support. Xamarin.Forms was chosen as it allows business-style apps to be built for both iOS and Android with most of the code shared. It is hoped that, over the life of the product, the use of Xamarin.Forms will help reduce maintenance costs and improve time-to-market.

Check out Xamarin’s pre-built examples for sample code and to see what can be done: https://www.xamarin.com/prebuilt

Why F#

Xamarin’s ecosystem, including Xamarin.Forms, supports both the C# and F# programming languages. F# is a pragmatic, low-ceremony, code-oriented programming language, and the preferred language choice for IDTechEx and its development staff.

If you’re new to F# and interested in learning more, I’d recommend checking out these resources:

Creating an F# cross platform Xamarin.Forms solution

To get started I followed Charles Petzold’s detailed article on Writing Xamarin.Forms Apps with F#, where the steps for creating a cross platform solution can be briefly summarised as:

  • Make sure you’re on the latest and greatest stable version of Xamarin (and Visual Studio if you’re on Windows)
  • Create a C#/.Net >> Cross Platform >> Blank App (Xamarin.Forms Portable)
  • Replace the C# Portable library with an F# one

Development environments

During development I used two environments off the same code base:

  • Visual Studio 2015 and Windows for testing and deploying the Android version
  • Xamarin Studio and Mac for testing and deploying the iOS version

Switching between Xamarin Studio and Visual Studio on the respective operating systems was seamless.

Build one to throw away

Initially I built a couple of very simple prototype apps using only a subset of the data: one using XAML and data binding, and another using just code. After some playing I came down on the side of the code-only approach.

Xamarin.Forms is quite close to WPF, Silverlight and Windows Store application development. I built a working skeleton/prototype as an F# script using WPF and the F# REPL, which gives incredibly fast feedback during development.
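
For the curious, scratch prototyping in WPF from an F# script looks something like the minimal sketch below. This isn’t the app’s actual prototype code, just the general shape of the technique; F# Interactive’s main thread is STA, so a WPF window can be driven straight from the REPL:

    #r "PresentationCore.dll"
    #r "PresentationFramework.dll"
    #r "WindowsBase.dll"
    #r "System.Xaml.dll"

    open System.Windows
    open System.Windows.Controls

    // a throwaway view: tweak it, re-send to FSI, see the result immediately
    let view = StackPanel()
    view.Children.Add(Label(Content = "Exhibitor name")) |> ignore
    view.Children.Add(Button(Content = "Website")) |> ignore
    Window(Title = "Prototype", Width = 360., Height = 240., Content = view).Show()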

Once the flow and views of the application were better defined, and I’d received early feedback from users, I threw away the prototype and rebuilt the app fully in Xamarin.Forms.

Note: the plan to throw one away referenced in this section heading refers to Fred Brooks Jr’s suggestion in his seminal book The Mythical Man-Month.

Forms DSL

During development in both WPF and Xamarin.Forms I built up a minimal domain-specific language to simplify form building, for example:

    let label text = Label(Text=text)

    let bold (label:Label) = label.FontAttributes <- FontAttributes.Bold; label

    let italic (label:Label) = label.FontAttributes <- FontAttributes.Italic; label

    let namedSize name (label:Label) = label.FontSize <- Device.GetNamedSize(name, typeof<Label>); label

    let micro label = label |> namedSize NamedSize.Micro

    let small label = label |> namedSize NamedSize.Small

    let medium label = label |> namedSize NamedSize.Medium

    let large label = label |> namedSize NamedSize.Large
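
The exhibitor page below also leans on a few helpers that aren’t shown in this excerpt: color, programColor, cachedImage, and a StackLayoutEx with a + operator for adding children. The names come from the code below, but the implementations here are my sketches, not the real ones (for instance, the real cachedImage wrapped FFImageLoading’s caching control):

    open Xamarin.Forms

    // sketch: a Label colouring helper in the same style as bold/italic
    let color (c:Color) (label:Label) = label.TextColor <- c; label

    // sketch: map a programme name to a brand colour (real mapping unknown)
    let programColor (_:string) = Color.Gray

    // sketch: a StackLayout with a '+' operator so children read declaratively
    type StackLayoutEx() =
        inherit StackLayout()
        static member (+) (layout:StackLayoutEx, view:View) =
            layout.Children.Add view
            layout

    // sketch: a fixed-size image; the real app cached images via FFImageLoading
    let cachedImage (url:string, width, height) =
        Image(Source = ImageSource.FromUri(System.Uri url),
              WidthRequest = width, HeightRequest = height)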

 

With this the exhibitor page could be written as:

    let exhibitorPage (exhibitor:json) =
        let program = exhibitor?memberOf?programName.AsString()
        let layout =
            StackLayoutEx(VerticalOptions = LayoutOptions.FillAndExpand, Padding=Thickness 10.0)
            + (cachedImage (exhibitor?logo.AsString(), 130.0, 78.0))
            + (exhibitor?name.AsString() |> label |> bold |> medium)
            + (program |> label |> color (programColor program))
            + (exhibitor?location?name.AsString() |> label)
            + ("Company Profile" |> label |> bold)
            + (exhibitor?description.AsString() |> label)
        let url = exhibitor?url.AsString()
        if not <| System.String.IsNullOrEmpty(url) then
            let action () = Device.OpenUri(System.Uri(url))
            Button(Text="Website", Command=Command(action)) |> layout.Children.Add
        ContentPage(Title="Exhibitor", Content=layout)

 

Below is a screen shot from the Android app of an exhibitor page generated by the code above:

[Screenshot: exhibitor page on Android]

 

Lists

For lists of items, for example exhibitors or presentations, I used Xamarin.Forms’ built-in ListView control with the ImageCell and TextCell, along with a custom cell providing extra subheadings.
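
As a rough illustration (not the app’s actual code), here is how a simple presentations list could be wired up with the built-in TextCell; the Session type and its Title and Speaker fields are invented for the example:

    open Xamarin.Forms

    type Session = { Title: string; Speaker: string }

    let sessionsPage (sessions: Session list) =
        // TextCell binds Text/Detail to the item's properties by name
        let template = DataTemplate(fun () ->
            let cell = TextCell()
            cell.SetBinding(TextCell.TextProperty, "Title")
            cell.SetBinding(TextCell.DetailProperty, "Speaker")
            box cell)
        let list = ListView(ItemTemplate = template)
        list.ItemsSource <- (sessions :> System.Collections.IEnumerable)
        ContentPage(Title = "Presentations", Content = list)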

The screen shot below is from the iOS version and shows a list of presentations using a custom cell:

 

[Screenshot: presentations list on iOS]

 

Other Libraries

For image loading I used the excellent FFImageLoading library, which can cache image files on the user’s device.

To show a map of the venue I used the Xamarin.Forms.Maps control:

[Screenshot: venue map]

Summary

Using Xamarin.Forms and F# allowed me to successfully build and publish a cross-platform mobile app targeting iOS and Android in a short time frame. F# allowed fast prototyping and quick feedback while building the app. Xamarin.Forms worked well and meant that almost all of the code was shared between the platforms. I’d wholeheartedly recommend this approach for similar business applications, and we plan to extend the app heavily in future releases.

Categories: Programming

fck: Fake Construction Kit

Yeah, it's Christmas time again, and Santa's elves are quite busy.

And when I say busy, I don't mean:

[Image: Santa's elves]

I mean busy like this:

[Image: Santa's elves]

So they decided to build some automation productivity tools, and they chose Santa's favorite language to do the job:

F#, of course!

F# scripting

Nobody would seriously use a compiled language for automation tools. Requiring compilation or a CI server for this kind of thing usually kills motivation.

Of course it is possible to write bash/batch files, but the syntax is fugly once you start to make more advanced tools.

Python, JavaScript, Ruby or PowerShell are cool, but you end up with dynamically typed scripting languages, which you'll come to regret when you have to maintain the tools over the long term.

F# is a statically typed language that can easily be scripted. Type inference makes it feel like shorter JavaScript, but with far higher safety!

Writing an F# script is easy and fast. Test it from the command line:

vim test.fsx

Then write:

printfn "Merry Christmas !"

then save and exit with :wq

Now launch it on Linux with:

fsharpi --exec test.fsx

or on Windows:

fsianycpu --exec test.fsx

Excellent.

The only problem is that typing fsharpi --exec every time is a bit tedious.

Bash/Batch to the rescue

We can create a bash/batch script to put in the PATH that will launch the script (for Linux):

vim test

fsharpi --exec test.fsx

chmod +x test

or on Windows:

vim test.cmd

fsianycpu --exec test.fsx

Done!

Better, but now we need to write a bash and/or a batch script for each F# script.

fck bash/batch dispatcher FTW!

We create a fck file (don't forget to chmod +x it) that takes a command:

#!/usr/bin/env bash


# fck tool path
fckpath=$(readlink -f "$0")
# fck tool dir
dir=$(dirname $fckpath)
script="$dir/fck-cmd/fck-$1.fsx"
shell="$dir/fck-cmd/fck-$1.sh"
cmd="$1"
shift

# packages if needed
if [ ! -d "$dir/fck-cmd/packages" ]
then
pushd "$dir/fck-cmd" > /dev/null
    mono "$dir/fck-cmd/.paket/paket.bootstrapper.exe" --run restore
popd > /dev/null
fi

# script command if it exists
if [ -e $script ]
then
    mono "$dir/fck-cmd/packages/FAKE/tools/FAKE.exe" "$script" -- $@

# shell command if it exists
elif [ -e $shell ]
then
    eval $shell $@

# help
else
pushd "$dir/fck-cmd" > /dev/null
    mono "$dir/fck-cmd/packages/FAKE/tools/FAKE.exe" "$dir/fck-cmd/fck-help.fsx" -- $cmd $@
popd > /dev/null
fi

and the batch version:

@echo off
set encoding=utf-8

set dir=%~dp0
set cmd=%1
set script="%dir%\fck-cmd\fck-%cmd%.fsx"
set batch="%dir%\fck-cmd\fck-%cmd%.cmd"
shift

set "args="
:parse
if "%~1" neq "" (
  set args=%args% %1
  shift
  goto :parse
)
if defined args set args=%args:~1%


if not exist "%dir%\fck-cmd\packages" (
pushd "%dir%\fck-cmd\"
"%dir%
popd 
)

if exist  "%script%" (
"%dir%/fck-cmd/packages/fake/tools/fake.exe" "%script%" -- %args%
) else if exist "%batch%" (
pushd "%dir%\fck-cmd\"
"%batch%" %cmd% %*
popd
) else (
"%dir%/fck-cmd/packages/fake/tools/fake.exe" "%dir%
)

Forget the paket part for now.

The bash script takes a command argument and checks whether a fck-cmd/fck-$cmd.fsx file exists. If it does, it runs it! It also works with shell scripts named fck-$cmd.sh or batch scripts named fck-$cmd.cmd, to integrate quickly with existing tools.

Fake for faster startups

When F# scripts start to grow big, especially with things like the Json or Xml type providers, load time can rise above acceptable limits for a CLI.

Using FAKE to launch scripts takes advantage of its compilation cache. We get the best of both worlds:

  • scriptability for quick changes and easy deployment
  • automatically cached JIT compilation for fast startup and execution

We could have written all the commands in a single .fsx file with a pattern match on the command name, but once we start to have more commands, the script becomes bigger and compilation takes longer. The central pattern match also becomes a friction point in source control.

FckLib

At some point we have recurring code across the tools, so we can create helper scripts that are included by the command scripts.

For instance, parsing the command line is often useful, so I created a helper:

open System

// culture-invariant, case-insensitive string comparison
let (==) x y = String.Equals(x, y, StringComparison.InvariantCultureIgnoreCase)

open System.Xml.Linq

module CommandLine =
    // get the command line, fck style...
    let getCommandLine() = 
        System.Environment.GetCommandLineArgs() 
        |> Array.toList
        |> List.skipWhile ((<>) "--")
        |> List.tail

    // check whether the command line starts with specified command
    let (|Cmd|_|) str cmdLine =
        match cmdLine with
        | s :: _ when s == str -> Some()
        | _ -> None 

We use -- to delimit the arguments reserved for the script. Since FAKE is used to launch the scripts, we can also include FakeLib for all the fantastic helpers it contains.

Here is a sample fck-cmd/fck-hello.fsx script that writes a hello message.

It uses FakeLib for the tracefn function and FckLib for getCommandLine.
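
The script itself is tiny. Here is a sketch of its shape (the helper file name fcklib.fsx and the exact messages are my assumptions):

#r @"packages/FAKE/tools/FakeLib.dll"
#load "fcklib.fsx"          // assumed name of the FckLib helper script above

open Fake                   // brings tracefn into scope
open Fcklib                 // module generated from the fcklib.fsx file name

// greet the first argument after --, or fall back to a default
match CommandLine.getCommandLine() with
| name :: _ -> tracefn "Hello %s !" name
| []        -> tracefn "Hello you !"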

You can call it with (once fck is in your Path environment variable):

fck hello Santa

Help

A tool without help is just a nightmare, and writing help should be easy.

The last part of the fck bash script launches the fck-help.fsx script.

This script tries to find a fck-<command>.txt file and display it, falling back to fck-help.txt.

For example, the help for our fck hello command will be in fck-hello.txt:

Usage:
fck hello [<name>]

Display a friendly message to <name> or to you if <name> is omitted.
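
A minimal fck-help.fsx could then be as simple as this sketch (the file layout and helper names are assumed, as before):

#r @"packages/FAKE/tools/FakeLib.dll"
#load "fcklib.fsx"          // assumed helper script name

open System.IO
open Fake
open Fcklib

// show fck-<cmd>.txt when it exists, otherwise the generic fck-help.txt
let topic =
    match CommandLine.getCommandLine() with
    | cmd :: _ when File.Exists(sprintf "fck-%s.txt" cmd) -> sprintf "fck-%s.txt" cmd
    | _ -> "fck-help.txt"

File.ReadAllText topic |> trace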

Of course we can then pimp fck-help.fsx to parse the txt help files and add codes for colors, verbosity, etc.

Deployment

Deployment is really easy: we can clone the git repository and add it to $PATH.

Run a command and it will automatically restore the packages if they are missing, then launch the script.

To upgrade to a new version, call fck update, defined in fck-update.sh :

script=$(readlink -f "$0")
dir=$(dirname $script)

pushd "$dir" > /dev/null
git pull
mono "$dir/.paket/paket.bootstrapper.exe" --run restore
popd > /dev/null

or the batch version, fck-update.cmd:

git pull
.paket\paket.bootstrapper.exe --run restore

Yep, it's that easy.

Happy Christmas

Using Santa's elves' tools, I hope you won't be stuck at work on Christmas Eve! Enjoy!

The full source is on GitHub.

Categories: Architecture, Requirements

Five Dysfunctions of a Team, Patrick Lencioni: Re-Read Week 10

[Image: The Five Dysfunctions of a Team cover]

The “Book” during unboxing!

In this week’s re-read of The Five Dysfunctions of a Team  by Patrick Lencioni (Jossey-Bass, Copyright 2002, 33rd printing), we tackle the sections titled Accountability, Individual Contributor, and The Talk.  We are getting close to the end of the novel portion of the book, but over the next few weeks we have a number of ideas to extract from the book before we review the model.

(Remember to buy a copy and read along.)  We are well over halfway through this book and I am considering re-reading Carol Dweck’s Mindset next.  What are your thoughts?

Accountability

The second off-site began with a review of progress toward the team’s goal of 18 (sales) deals.  Lencioni uses the 18-deal goal to illustrate developing a measurable goal and how the team holds itself accountable.  As a reminder, the four key drivers the team had agreed upon in the first off-site were: product demonstrations, competitive analysis, sales training, and product brochures.  Martin reported that product demonstrations were ahead of schedule, partially because Carlos had pitched in to help Martin. Carlos’s chipping in had the unintended consequence of putting the competitor analysis that Carlos was leading behind schedule. The competitor analysis was also behind because Carlos had not gotten support from Nick’s people.  This detail illustrates two issues.  First, Carlos had not gone to Nick to talk about getting the needed support; Carlos had not engaged to hold Nick accountable.  Second, no one had actually challenged Carlos about the progress he was making on the competitor analysis. Carlos and the team had fallen down on accountability.

Lencioni (using Kathryn’s voice) states that there are three reasons it is difficult to hold people accountable.

  1. Some people are just generally helpful,
  2. Some get defensive, or
  3. Some are intimidating.

There are probably other reasons it is difficult to hold or be held accountable.  Accountability is intertwined with the concept of trust.  Without accountability, it is difficult to trust.  Holding someone accountable does not represent a lack of trust, but rather signals trust that team members will push each other to make the team better.

As this section concludes, Mikey holds herself out as better than the team and only sarcastically goes along with the decision for everyone to attend sales training (note: once upon a time I might have been this person).

As a team, holding each other accountable for the actions and activities that we’ve agreed to do is critical for the health of the team.  A team without enough trust for members to be willing to hold each other accountable will find it very difficult to make progress.

Individual contributor

The fourth driver of DecisionTech’s 18-deal goal, new product brochures, was the next topic. Mikey proudly produced mockups of the brochures from her bag and announced they were going to print next week. A train wreck ensued. Nick was uncomfortable because his people had been doing research and no one had talked to them. Mikey, as the marketing lead, had struck out on her own without consulting or interacting with the team.  Her opinion was more important to her than that of the team. BOOM.  Kathryn called for a long break and dismissed everyone except Mikey.

Individuals need to participate in and integrate into the team.  Working on a team requires attributes such as humility, the ability to accept criticism, and a willingness to work in a manner that allows others to have input.  While individual contributors are important, they are generally not the right people for an effective team.

The Talk

Mikey did not seem to see the end coming; she was not aware of her impact on the team.  Her reaction to Kathryn’s comment, “I don’t think you are fit for this team,” showed that she did not understand that impact.

Throughout the story, Lencioni paints a picture of Mikey as a person who cuts herself off, rolls her eyes at statements she doesn’t believe rather than getting involved in the discussion, and generally acts as a motivation heatsink. Mikey only really respected herself.  As the talk progressed, she turned to veiled threats to deflect Kathryn’s decision (a form of frustration on the Kubler-Ross change curve).  In the end Kathryn felt that Mikey was coming to terms with the situation, but she was wrong.  Another Lencioni cliffhanger.

“Talks” like these are a form of negotiation.  In these circumstances, unless both parties see the event coming, one party will tend to have less information or power than the other.  When a similar situation occurred between Kathryn and Nick, Nick was able to delay the decision so that he could reduce the stress of the situation and help even the power balance.  Saying yes immediately in this type of negotiation probably isn’t a good idea.  Let things sink in, and even if you could say yes immediately, don’t.

Three quick takeaways:

  • Team members hold other team members accountable.        
  • Be aware of how you affect the people around you or suffer the consequences!
  • Try to step back and reduce the stress when confronted by tough negotiations.

Previous Installments in the re-read of  The Five Dysfunctions of a Team by Patrick Lencioni:

Week 1 – Introduction through Observations

Week 2 – The Staff through the End Run

Week 3 – Drawing the Line through Pushing Back

Week 4 – Entering Danger through Rebound

Week 5 – Awareness through Goals

Week 6 – Deep Tissue through Exhibition

Week 7 – Film Noir through Application

Week 8 – On-site through Fireworks

Week 9 – Leaks through Plowing On


Categories: Process Management

Stuff The Internet Says On Scalability For December 2nd, 2016

Hey, it's HighScalability time:

 

A phrase you've probably heard a lot this week: AWS announces...

 

If you like this sort of Stuff then please support me on Patreon.
  • 18 minutes: latency to Mars; 100TB: biggest dynamodb table; 55M: visits to Kaiser were virtual; $2 Billion: yearly Uber losses; 91%: Apple's take of smartphone profits; 825: AI patents held by IBM; $8: hourly cost of a spot welding in the auto industry; 70%: Walmart website traffic was mobile; $3 billion: online black friday sales; 80%: IT jobs replaceable by automation; $7500: cost of the one terabit per second DDoS attack on Dyn; 

  • Quotable Quotes:
    • @BotmetricHQ: #AWS is deploying tens of thousands of servers every day, enough to power #Amazon in 2005 when it was a $8.5B Enterprise. #reInvent
    • bcantrill: From my perspective, if this rumor is true, it's a relief. Solaris died the moment that they made the source proprietary -- a decision so incredibly stupid that it still makes my head hurt six years later.
    • Dropbox: it can take up to 180 milliseconds for data traveling by undersea cables at nearly the speed of light to cross the Pacific Ocean. Data traveling across the Atlantic can take up to 90 milliseconds.
    • @James_R_Holmes: The AWS development cycle: 1) Have fun writing code for a few months 2) Delete and use new AWS service that replaces it
    • @swardley: * asked "Can Amazon be beaten?" Me : of course * : how? Me : ask your CEO * : they are asking Me : have you thought about working at Amazon?
    • @etherealmind: Whatever network vendors did to James Hamilton at AWS, he is NEVER going to forgive them.
    • Stratechery: the flexibility and modularity of AWS is the chief reason why it crushed Google’s initial cloud offering, Google App Engine, which launched back in 2008. Using App Engine entailed accepting a lot of decisions that Google made on your behalf; AWS let you build exactly what you needed.
    • @jbeda: AWS Lambda@Edge thing is huge. It is the evolution of the CDN. We'll see this until there are 100s of DCs available to users.
    • erikpukinskis: Everyone in this subthread is missing the point of open source industrial equipment. The point is not to get a cheap tractor, or even a good one. The point is not to have a tractor you can service. The point is to have a shared platform.
    • John Furrier: Mark my words, if Amazon does not start thinking about the open-source equation, they could see a revolt that no one’s ever seen before in the tech industry. If you’re using open source to build a company to take territory from others, there will be a revolt.
    • @toddtauber: As we've become more sophisticated at quantifying things, we've become less willing to take risks. via @asymco
    • Resilience Thinking: Being efficient, in a narrow sense, leads to elimination of redundancies-keeping only those things that are directly and immediately beneficial. We will show later that this kind of efficiency leads to drastic losses in resilience.
    • Connor Gibson: By placing advertisements around the outside of your game (in the header, footer and sidebars) as well as the possibility video overlays it is entirely possible to earn up to six figures through this platform.
    • Google Analytics: And maybe, if nothing else, I guess it suggests that despite the soup du jour — huge seed/A rounds, massive valuations, binary outcomes— you can sometimes do alright by just taking less money and more time.
    • badger_bodger: I'm starting to get Frontend Fatigue Fatigue.
    • Steve Yegge: But now, thanks to Moore's Law, even your wearable Android or iOS watch has gigs of storage and a phat CPU, so all the decisions they made turned out in retrospect to be overly conservative.  And as a result, the Android APIs and frameworks are far, far, FAR from what you would expect if you've come from literally any other UI framework on the planet.  They feel alien. 
    • David Rosenthal: Again we see that expensive operations with cheap requests create a vulnerability that requires mitigation. In this case rate limiting the ICMP type 3 code 3 packets that get checked is perhaps the best that can be done.
    • @IAmOnDemand: Private on public cloud means the you can burst public/private workloads intothe public and shut down yr premise or... #reinvent
    • @allingeek: It isn’t “serverless" if you own the server/device. It is just a functional programing framework. #reinvent
    • brilliantcode: If you told me to use Azure two years ago I would've laughed you out of the room. But here I am in 2016, using Azure, using ASP.net + IIS on Visual Studio. that's some powerful shit and currently AWS has cost leadership and perceived switching cost as their edge.
    • seregine: Having worked at both places for ~4 years each, I would say Amazon is much more of a product company, and a platform is really a collection of compelling products. Amazon really puts customers first...Google really puts ideas (or technology) first.
    • api: Amazon seems to be trying to build a 100% proprietary global mainframe that runs everywhere.
    • Athas: No, it [Erlang] does not use SIMD to any great extent. Erlang uses message passing, not data parallelism. Erlang is for concurrency, not parallelism, so it would benefit little from these kinds of massively parallel hardware.
    • @chuhnk: @adrianco @cloud_opinion funnily those of us who've built platforms at various startups now think a cloud provider is the best place to be.
    • @jbeda: So the guy now in charge of building OSS communities at @awscloud says you should just join Amazon? Communities are built on diversity.
    • @JoeEmison: There's also an aspect of some of these AWS services where they only exist because of problems with other AWS services.
    • logmeout: Until bandwidth pricing is fixed rather than nickel and dimeing us to death; a lot of us will choose fixed pricing alternatives to AWS, GCP and Rackspace.
    • arcticfox: 100%. I can't stand it [AWS]. It's unlimited liability for anyone that uses their service with no way to limit it. If you were able to set hard caps, you could have set yours at like $5 or even $0 (free tier) and never run into that.
    • @edw519: I hate batch processing so much that I won't even use the dishwasher. I just wash, dry, and put away real time.
    • @CodeBeard: it could be argued that games is the last real software industry. Libraries have reduced most business-useful code to glue.
    • Gall's Law: A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.
    • @mathewlodge: AWS now also designing its own ASICs for networking #Reinvent
    • @giano: From instances to services, AWS better than anybody else understood that use case specific wins over general purpose every day. #reinvent
    • @ben11kehoe: AWS hitting breadth of capability hard. Good counterpoint to recent "Google is 50% cheaper" news #reinvent
    • Michael E. Smith: But there are also positive effects of energized crowding. Urban economists and economic geographers have known for a long time that when businesses and industries concentrate themselves in cities, it leads to economies of scale and thus major gains in productivity. These effects are called agglomeration effects.
    • Andrew Huang: The inevitable slowdown of Moore’s Law may spell trouble for today’s technology giants, but it also creates an opportunity for the fledgling open-hardware movement to grow into something that potentially could be very big. 
    • Stratechery: This is Google’s bet when it comes to the enterprise cloud: open-sourcing Kubernetes was Google’s attempt to effectively build a browser on top of cloud infrastructure and thus decrease switching costs; the company’s equivalent of Google Search will be machine learning.

  • Just what has Amazon been up to?

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture