Question
I have a bit of a design question and I'm curious whether other users have run into this and how they developed the most elegant solution. I have some integration-type feature tests in Cucumber using Capybara/Celerity/Selenium. A portion of these tests drive an external website to see how well my resources integrate (cookies, JavaScript, etc.).
One particular external site is running heavy A/B testing of its own between the current design and a similar but significantly different one, so my tests fail against the new design and pass against the old, succeeding only about 50% of the time. Obviously, rewriting my tests for the new design is pointless, as they would then fail the other 50% of the time.
I've tried parameterizing my steps along these lines:
Given I visit the new site
And I click on the link that is labeled "new link text"
Given I visit the old site
And I click on the link that is labeled "old link text"
...
When /^I visit the (old|new) site$/ do |version|
  # A backdoor query parameter forces a particular A/B variant.
  version_url = "http://www.example.com/?backdoorversionparam=0"
  version_url = "http://www.example.com/?backdoorversionparam=1" if version == "new"
  step "I go to the page \"#{version_url}\""
end
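(For reference, the "I go to the page" step this delegates to is just a thin wrapper around Capybara's visit in my support files, roughly:

When /^I go to the page "([^"]*)"$/ do |url|
  visit url  # Capybara navigates the configured driver to the URL
end
)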
This still requires me to write two essentially identical, fairly large feature files, differing only in the parameters. I'll just have to deprecate the old one later, and in the meantime all of my code becomes more complex. Is there a way to fall back to an alternate test if one fails? What's the most elegant way to design this, knowing that the A/B test will run for a month or so and the whole site might change again in another year?
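For concreteness, the naive fallback I can imagine is a step that tries the new design's link text first and rescues into the old one (the labels here are placeholders):

When /^I click on the versioned link$/ do
  begin
    click_link "new link text"   # try the new design first
  rescue Capybara::ElementNotFound
    click_link "old link text"   # fall back to the old design
  end
end

But scattering rescue logic like this through every step definition feels wrong, which is why I'm asking about the overall design.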
Source: https://stackoverflow.com/questions/12078675/writing-integration-tests-against-external-resources-which-are-a-b-testing