Help me improve my continuous deployment workflow

I've been developing a workflow for practicing a mostly automated continuous deployment cycle for a PHP project. I'd like some feedback on possible process or technical bottlenecks in this workflow, suggestions for improvement, and ideas for how to better automate and increase the ease-of-use for my team.


Core components:

  • Hudson CI server
  • Git and GitHub
  • PHPUnit unit tests
  • Selenium RC
  • Sauce OnDemand for automated, cross-browser, cloud testing with Selenium RC
  • Puppet for automating test server deployments
  • Gerrit for Git code review
  • Gerrit Trigger for Hudson

EDIT: I've changed the workflow graphic to take ircmaxwell's contributions into account by: removing PHPUnit's extension for Selenium RC and running those tests only as part of the QC stage; adding a QC stage; moving UI testing after code review but before merges; moving merges after the QC stage; moving deployment after the merge.

This workflow graphic describes the process. My questions / thoughts / concerns follow.

My concerns / thoughts / questions:

  • Overall difficulty using this system.

  • Time involvement.

  • Difficulty employing Gerrit.

  • Difficulty employing Puppet.

  • We'll be deploying on Amazon EC2 instances later. If we're going about setting up Debian packages with Puppet and deploying to Linode slices now, is there a potential for a working deployment on Linode to break on EC2? Should we instead be doing our builds and deployments on EC2 from the get-go?

  • Another question re: EC2 and Puppet. We're also considering using Scalr as a solution. Would it make as much sense to avoid the overhead of Puppet for this alone and invest in Scalr instead? I have a secondary (ha!) concern here about cost; the Selenium tests shouldn't run so often that EC2 build instances would be up 24/7, but for something like a five-minute build, paying for a full hour of EC2 usage seems a bit much (see the rough arithmetic after this list).

  • Possible process bottlenecks on merges.

  • Could "A" be moved?
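A rough illustration of the cost concern above, assuming EC2's per-instance-hour rounding at the time and an hourly rate of P: twelve five-minute builds, each on a freshly booted instance, bill twelve instance-hours (12P), while the same twelve builds queued onto one instance left up for the hour bill a single instance-hour (P). Pooling builds onto a shared instance is therefore roughly an order of magnitude cheaper than booting per build.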

Credits: Portions of this workflow are inspired by Digg's awesome post on continuous deployment. The workflow graphic above is inspired by the Android OS Project.

Solution

How many people are working on it? If you only have maybe 10 or 20 developers, I'm not sure it will make sense to put such an elaborate workflow into place. If you're managing 500, sure...

My personal feeling is KISS. Keep It Simple, Stupid... You want a process that's both efficient and, more importantly, simple. If it's complicated, either nobody will do it right, or parts will slip over time. If you make it simple, it will become second nature, and after a few weeks nobody will question the process (well, the semantics of it, anyway)...

And my other personal feeling: always run all of your UNIT tests. That way, you can skip a whole decision tree in your flow chart. After all, which is more expensive: a few minutes of CPU time, or the brain cycles needed to tell a partial test pass from a massive test failure? Remember, a fail is a fail, and there's no practical reason a reviewer should ever be shown code that has the potential to fail the build.
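To make the "run everything, always" idea concrete, here is a minimal sketch of an aggregate suite for PHPUnit 3.x (the class name and tests/ path are hypothetical); pointing the Hudson job at this one suite means there is never a partial unit-test run to reason about:

```php
<?php
// AllTests.php -- hypothetical aggregate suite. The Hudson job runs
// "phpunit AllTests.php" so every unit test executes on every build;
// there is no partial-pass state to interpret.
require_once 'PHPUnit/Autoload.php'; // PHPUnit 3.5+; harmless under the CLI runner

class AllTests
{
    public static function suite()
    {
        $suite = new PHPUnit_Framework_TestSuite('All unit tests');

        // Pick up every *Test.php under tests/ automatically, so adding
        // a test never requires editing a manual list.
        foreach (glob(dirname(__FILE__) . '/tests/*Test.php') as $file) {
            $suite->addTestFile($file);
        }

        return $suite;
    }
}
```

Because the suite globs the test directory, "run all the tests" stays true as the codebase grows, and that branch of the flow chart collapses to a plain pass/fail.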

Now, Selenium tests are typically quite expensive, so I might agree to push those off until after the reviewer approves. But you'll need to think about that one...

Oh, and if I were implementing this, I would put a formal QC stage in there. I want human testers to look at any changes that are being made. Yes, Selenium can verify the things you know about, but only a human can find the things you didn't think of. Feed their findings back into new Selenium and integration tests to prevent regressions...
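As a sketch of what feeding a QC finding back might look like with PHPUnit's Selenium RC extension (which the asker has moved into the QC stage) — the host, the page, and the expected message here are all hypothetical:

```php
<?php
// Hypothetical regression test capturing a bug a human QC tester found,
// so Selenium re-checks it automatically on every future QC pass.
require_once 'PHPUnit/Extensions/SeleniumTestCase.php';

class CheckoutRegressionTest extends PHPUnit_Extensions_SeleniumTestCase
{
    protected function setUp()
    {
        $this->setBrowser('*firefox');
        $this->setBrowserUrl('http://staging.example.com/'); // assumed staging host
    }

    // QC finding: checking out with an empty cart rendered a blank page.
    // After the fix, a friendly message should appear instead.
    public function testEmptyCartCheckoutShowsMessage()
    {
        $this->open('/checkout');
        $this->waitForPageToLoad('30000');
        $this->assertTextPresent('Your cart is empty');
    }
}
```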
