BDD is an "outside-in" methodology, which, as I understand it, means you start with what you know. You write your stories and scenarios, and then implement the outermost domain objects, moving "inwards" and "deliberately" discovering collaborators as you go--down through service layers, domain layers, etc. For a collaborator that doesn't exist yet, you mock it (or "fake it") until you make it. (I'm stealing some of these terms straight from Dan North and Kent Beck.)
So, how does a UI fit into this?
In one of his blog entries, North rewrites this:
Given an unauthenticated user
When the user tries to navigate to the welcome page
Then they should be redirected to the login page
When the user enters a valid name in the Name field
And the user enters the corresponding password in the Password field
And the user presses the Login button
Then they should be directed to the welcome page
into this:
Given an unauthenticated user
When the user tries to access a restricted asset
Then they should be directed to a login page
When the user submits valid credentials
Then they should be redirected back to the restricted content
He does this to eliminate language from non-relevant domains, one of which is the UI ("Name field", "Password field", "Login button"). Now the UI can change and the story (or rather, the story's intent) doesn't break.
So when I write the implementation for this story, do I use the UI or not? Is it better to fire up a browser and execute "the user submits valid credentials" via a Selenium test, or to connect to the underlying implementation directly (such as an authentication service)? BTW, I'm using jBehave as my BDD framework, but it could just as easily be Cucumber, rSpec, or a number of others.
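To make the trade-off concrete, here is a rough jBehave sketch of the same step bound two ways: once through the browser with Selenium, and once straight against the domain. The names involved (AuthenticationService, Session, the element ids, the credentials, the welcome URL) are placeholders of mine, not anything from North or jBehave.

    import org.jbehave.core.annotations.Then;
    import org.jbehave.core.annotations.When;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    import static org.junit.Assert.assertTrue;

    // Option A: bind the step to the UI and drive a real browser with Selenium.
    public class BrowserLoginSteps {
        private final WebDriver driver;

        public BrowserLoginSteps(WebDriver driver) {
            this.driver = driver;
        }

        @When("the user submits valid credentials")
        public void submitValidCredentials() {
            driver.findElement(By.id("name")).sendKeys("alice");      // placeholder id and user
            driver.findElement(By.id("password")).sendKeys("secret"); // placeholder id and password
            driver.findElement(By.id("login")).click();
        }

        @Then("they should be redirected back to the restricted content")
        public void shouldBeOnRestrictedContent() {
            assertTrue(driver.getCurrentUrl().endsWith("/welcome"));  // placeholder URL
        }
    }

    // Option B (separate file): bind the same step to the service layer, skipping the UI entirely.
    public class ServiceLoginSteps {
        private final AuthenticationService authService; // hypothetical domain service
        private Session session;                          // hypothetical result type

        public ServiceLoginSteps(AuthenticationService authService) {
            this.authService = authService;
        }

        @When("the user submits valid credentials")
        public void submitValidCredentials() {
            session = authService.authenticate("alice", "secret");
        }

        @Then("they should be redirected back to the restricted content")
        public void shouldHaveAccess() {
            assertTrue(session.isAuthenticated());
        }
    }

Either binding satisfies the same scenario text; the question is only which layer you trust to stand in for "the user".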
I tend not to test the UI in an automated fashion, and I'm cautious of GUI automation tools like Selenium because I think the tests (1) can be overly brittle and (2) get run where the cost of execution is the greatest. So my inclination is to manually test the UI for aesthetics and usability and leave the business logic to lower, more easily automatable layers. (And possibly layers less likely to change.)
But I'm open to being converted on this. So, is BDD for UI or not?
PS. I have read all the posts on SO I could find on this topic, and none really address my question. This one gets closest, but I'm not talking about separating the UI into a separate story; rather, I'm talking about ignoring it entirely for the purposes of BDD.
Most people who use automated BDD tools use it at the UI layer. I've seen a few teams take it to the next layer down - the controller or presenter layer - because their UI changes too frequently. One team automated from the UI on their customer-facing site and from the controller on the admin site, since if something was broken they could easily fix it.
Mostly BDD is designed to help you have clear, unambiguous conversations with your stakeholders (or to help you discover the places where ambiguity still exists!) and carry the language into the code. The conversations are much more important than the tools.
If you use the language that the business use when writing your steps, and keep them at a high level as Dan suggests, they should be far less brittle and more easily maintainable. These scenarios aren't really tests; they're examples of how you're going to use the system, which you can use in conversation, and which give you tests as a nice by-product. Having the conversations around the examples is more important than the automation, whichever level you do it at.
I'd say, if your UI is stable, give automation a try, and if it doesn't work for you, either drop to a lower level or ensure you've got sufficient manual testing. If you're testing aesthetics manually anyway, that will help (and never, ever use automation to test aesthetics!). If your UI is not stable, don't automate it - you're just adding commitment to something that you know is probably going to change, and automation in that case will make it harder.
I'm new to BDD myself, but I found the cuke4ninja site helpful in this regard. What they suggest (my interpretation) is this: your step definitions stay high level and UI-agnostic, and call into a "workflow" class, which groups details like "click this button" and "populate this field" into a method that captures the workflow under test; that in turn calls into a "screen driver" class that handles the UI automation for that particular screen. That way all the UI automation code is abstracted away from the step definitions and lives in a single location, and if the UI changes, you only have to change the code in the "screen driver" instead of in multiple tests. Here is the relevant page where it is discussed.
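As I read it, the layering comes out roughly like the sketch below (the class names and element ids are my own invention, not cuke4ninja's): the step definition talks only in business terms, the workflow class turns that into a sequence of user actions, and the screen driver is the only code that knows Selenium or the page's structure.

    import org.jbehave.core.annotations.When;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    // Step definitions: high level, UI-agnostic, written in the business's language.
    public class LoginSteps {
        private final LoginWorkflow workflow;

        public LoginSteps(LoginWorkflow workflow) {
            this.workflow = workflow;
        }

        @When("the user submits valid credentials")
        public void submitValidCredentials() {
            workflow.logInAs("alice", "secret"); // placeholder credentials
        }
    }

    // Workflow class: groups "click this button", "populate this field" into one
    // method that captures the workflow under test.
    class LoginWorkflow {
        private final LoginScreenDriver loginScreen;

        LoginWorkflow(LoginScreenDriver loginScreen) {
            this.loginScreen = loginScreen;
        }

        void logInAs(String name, String password) {
            loginScreen.enterName(name);
            loginScreen.enterPassword(password);
            loginScreen.pressLogin();
        }
    }

    // Screen driver: the only place that knows about Selenium and this screen's element ids.
    class LoginScreenDriver {
        private final WebDriver driver;

        LoginScreenDriver(WebDriver driver) {
            this.driver = driver;
        }

        void enterName(String name)         { driver.findElement(By.id("name")).sendKeys(name); }
        void enterPassword(String password) { driver.findElement(By.id("password")).sendKeys(password); }
        void pressLogin()                   { driver.findElement(By.id("login")).click(); }
    }

If the login page's markup changes, only LoginScreenDriver has to change; the step definitions and the workflow stay put.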
What does BDD describe?
In teams following Behaviour Driven Development (BDD), the Acceptance Criteria (aka Rules) should describe "what the system does" rather than "how the system does it".
So where are the UI/UX details captured in a team that follows BDD?
In teams using BDD, User Interface (UI) and User Experience (UX) details (buttons, clicks, animations, etc.) should not be written as Acceptance Criteria (aka Rules) in text form under a ticket (e.g. one issued in a ticketing tool such as JIRA or GitLab). Instead, they should be captured within the design artefacts (wireframes, user journeys, individual screens, etc.), for example as text notes and annotations embedded on the design screens, or simply as comments next to the screens.
Source: https://stackoverflow.com/questions/10356005/is-bdd-really-applicable-at-the-ui-layer