Delightful Testing, Joyful Integration


Infographic of the Process

(Crummy first cut at the notes)

Joyful testing/CI/Deployment

Ben, David, Kathryn, Jack, Mike, David

Brain dump: Theory of testing

Existing project -> Integration tests:
PHP: Behat. Ruby: RSpec feature specs.
Goes through your app like it's a user: create a context/user, log in.
iOS: Instruments lets you script device key presses.
These are end-to-end tests.
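
The session's examples were Behat (PHP) and RSpec (Ruby); as a rough sketch of the same idea in Python, an integration test drives the app the way a user would. The URL and routes here are hypothetical:

 # Integration test sketch: drive the app the way a user would.
 # BASE_URL and the /signup, /login, /dashboard routes are hypothetical.
 import os
 import requests

 BASE_URL = os.environ.get("APP_URL", "http://localhost:8000")

 def test_signup_and_login():
     session = requests.Session()
     # Create the context: a fresh user
     resp = session.post(BASE_URL + "/signup",
                         data={"email": "test@example.org", "password": "secret"})
     assert resp.status_code == 200
     # Log in as that user
     resp = session.post(BASE_URL + "/login",
                         data={"email": "test@example.org", "password": "secret"})
     assert resp.status_code == 200
     # The logged-in page should know who we are
     resp = session.get(BASE_URL + "/dashboard")
     assert "test@example.org" in resp.text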

-> Unit tests: "test-driven development" (Ben thinks it's fascist). Isolated: break the code down into small pieces of functionality and test each one.
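
A minimal sketch of a unit test with Python's built-in unittest; the slugify function is a made-up example of one small piece of functionality:

 # Unit test sketch: one small, isolated piece of functionality.
 import unittest

 def slugify(title):
     """Hypothetical function under test: turn a title into a URL slug."""
     return title.strip().lower().replace(" ", "-")

 class TestSlugify(unittest.TestCase):
     def test_spaces_become_hyphens(self):
         self.assertEqual(slugify("Joyful Testing"), "joyful-testing")

     def test_surrounding_whitespace_is_stripped(self):
         self.assertEqual(slugify("  CI  "), "ci")

 if __name__ == "__main__":
     unittest.main()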

Once you have tests, put them into Continuous Integration: the tests run automatically every time you commit code.
Example: GitHub plus Travis-CI.org - free for open source. A server that listens for your commits.
For private repos: CircleCI - not free for open source, but more affordable for private repos. CircleCI if you have the budget; it's 2nd gen, Travis CI is 1st gen.
Hosted CI as a service.
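
For a Python project, the whole Travis setup can be one short .travis.yml checked into the repo. This is a sketch under assumptions (pytest as the runner, a requirements.txt for dependencies):

 # .travis.yml sketch for a Python project
 language: python
 python:
   - "2.7"
 install:
   - pip install -r requirements.txt
 script:
   - pytest

Travis picks the file up on every push and reports pass/fail back to the commit.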

It's joyful because you don't think about it

CI will notify you, and will identify who busted the code.

Continuous Deployment - push code from CI straight to your server. OS X server, Xcode. Travis now has Mac OS X build systems too.

You can write scripts to automate your deployment. For Drupal, e.g., those can be drush scripts. (Sketch below.)
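
A sketch of such a deployment script using Fabric (which comes up again below), in Fabric 1.x style; the host, paths, and reload trick are hypothetical:

 # fabfile.py - deployment script sketch (Fabric 1.x style).
 # The host, paths, and reload mechanism are hypothetical.
 from fabric.api import cd, env, run

 env.hosts = ["deploy@www.example.org"]

 def deploy():
     """Pull the latest code, install dependencies, reload the app."""
     with cd("/srv/myapp"):
         run("git pull origin master")
         run("pip install -r requirements.txt")
         run("touch reload.txt")  # e.g. signal the app server to reload

Run it with "fab deploy"; for Drupal, the run() calls would be drush commands instead.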

Accountability and empowerment

Watchers: in Ruby, Guard. Also Sentry, CodeKit. Run from the command line. A watcher watches your code directory for changes and runs your unit tests. Can rely on naming conventions.
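
Guard is Ruby; a sketch of the same idea in Python, assuming the watchdog package and pytest (both assumptions, not from the session):

 # Watcher sketch: rerun the test suite whenever a .py file changes.
 # Assumes "pip install watchdog" and pytest as the test runner.
 import subprocess
 import time
 from watchdog.events import FileSystemEventHandler
 from watchdog.observers import Observer

 class RunTestsOnChange(FileSystemEventHandler):
     def on_modified(self, event):
         if event.src_path.endswith(".py"):
             subprocess.call(["pytest"])

 if __name__ == "__main__":
     observer = Observer()
     observer.schedule(RunTestsOnChange(), path=".", recursive=True)
     observer.start()
     try:
         while True:
             time.sleep(1)
     except KeyboardInterrupt:
         observer.stop()
     observer.join()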

Watchers run in your own development environment, not in the deployment process.

You don't have to think about it

Q. When using Travis, do you have to use their baked-in deployment stuff, or can you use your own?
A. You can use your own scripts, e.g. Fabric / fab scripts.

Browser automation: Selenium, PhantomJS, SlimerJS (for Gecko).
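
A minimal Selenium sketch in Python; the URL, field names, and local Firefox/geckodriver setup are all assumptions:

 # Browser-automation sketch with Selenium (pip install selenium).
 # URL and element names are hypothetical; assumes Firefox + geckodriver.
 from selenium import webdriver
 from selenium.webdriver.common.by import By

 driver = webdriver.Firefox()
 try:
     driver.get("http://localhost:8000/login")
     driver.find_element(By.NAME, "email").send_keys("test@example.org")
     driver.find_element(By.NAME, "password").send_keys("secret")
     driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
     assert "Dashboard" in driver.title
 finally:
     driver.quit()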

Q. Is there a middle ground between strict test-driven development and not testing at all?
A. Sometimes you have to ramp up while things are still in flux and you don't know how to test yet.


If you have Continuous Integration, you also get test coverage.
Test coverage, static analysis: Coveralls, Code Climate.
Metrics: how many lines of code, or what % of functions, are covered. It also reports which lines aren't covered.
Python: Flake8 for code style; the style standard for Python is called PEP 8.
Code style, code smells. Duplicate functions are a code smell.
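
A sketch of where those numbers come from, using coverage.py (normally you'd just run "coverage run -m pytest" then "coverage report"; the myapp module here is hypothetical):

 # Coverage sketch using the coverage.py API (pip install coverage).
 import coverage

 cov = coverage.Coverage()
 cov.start()
 import myapp  # hypothetical module whose code we want measured
 myapp.main()
 cov.stop()
 cov.save()
 cov.report()  # prints lines covered and percent per file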

The tools give you immediate feedback

Software as a service $20/month - worth it

Q. How about benchmarking/profiling?
A. New Relic - free for a lot of stuff, but just spend the freaking money. Airbrake - 1st gen; Sentry - 2nd gen.

iOS - Crittercism. If your website throws an error, the service will email you: an initial notice, plus a digest.

Logger/logging - most useful for 404s
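
A sketch of that kind of logging in Python; Flask is an assumption (the session named no framework), as are the file name and format:

 # Logging sketch: record 404s so broken links surface quickly.
 import logging
 from flask import Flask, request

 logging.basicConfig(filename="app.log", level=logging.INFO)
 app = Flask(__name__)

 @app.errorhandler(404)
 def log_not_found(error):
     logging.warning("404: %s (referrer: %s)", request.path, request.referrer)
     return "Not found", 404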

Q. How do you justify/estimate the effort and get buy-in?
A. Lie to them. You can pay a heavy penalty otherwise - lose 100k signups on something mission-critical. Scare tactics aren't working, though.

Heroku: no direct push deployment - always push through these tools.

Assertion layer

Q. How do you do integration testing when 3rd-party services are involved (Facebook, donation processors)?
A. Create multiple accounts - 4x as many. Use tokens/environment variables and fake credit card numbers (but not all services offer those - Global Collect doesn't).
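
A sketch of stubbing a donation processor out in Python tests; the DonationClient class and its charge() method are hypothetical stand-ins for a real SDK:

 # Third-party-service test sketch with unittest.mock.
 # DonationClient and charge() are hypothetical stand-ins for a real SDK;
 # real sandbox tokens would come from environment variables, not the code.
 import unittest
 from unittest import mock

 class DonationClient:
     def __init__(self, token):
         self.token = token

     def charge(self, card_number, cents):
         raise NotImplementedError("talks to the real service")

 class TestDonationFlow(unittest.TestCase):
     @mock.patch.object(DonationClient, "charge",
                        return_value={"status": "ok"})
     def test_charge_is_made_once(self, fake_charge):
         client = DonationClient(token="fake-token")
         # A classic fake test card number; not every processor accepts one.
         result = client.charge("4242424242424242", 500)
         self.assertEqual(result["status"], "ok")
         fake_charge.assert_called_once_with("4242424242424242", 500)

 if __name__ == "__main__":
     unittest.main()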

Q. ActionKit - testing client-side JS stuff. Liquid for templating? Django.
A. Also do an API call afterwards, to check whether the data actually got written back to the DB.
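
A sketch of that follow-up check in Python; the endpoint and fields are hypothetical:

 # After driving the client-side flow, confirm the data actually landed.
 import requests

 def assert_action_recorded(base_url, email):
     resp = requests.get(base_url + "/api/actions", params={"email": email})
     resp.raise_for_status()
     actions = resp.json()
     assert any(a["email"] == email for a in actions), \
         "client-side submission never reached the database"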

Week-long sprint approach - break

Puppet - to set up a new server