SpecFlow

SpecFlow fails when trying to generate test execution report

家住魔仙堡 submitted on 2019-12-05 04:02:59
I've got a project that's using SpecFlow, NUnit and Coypu to do acceptance tests on a web application. I've got the project building OK via Jenkins on a build server. Jenkins calls a psake script which runs msbuild on the specs project, then the script calls nunit-console to run the specs/tests, and then I want to generate a report from SpecFlow.

    Framework "4.0"
    task Default -depends RunSpecs
    task BuildSpecs {
        $env:EnableNuGetPackageRestore = "true"
        msbuild /t:Rebuild ReturnsPortal.Specs.csproj
    }
    task RunSpecs -depends BuildSpecs {
        exec { & "C:\path\to\NUnit 2.5.9\bin\net-2.0\nunit-console-x86

SpecFlow: ClassInitialize and TestContext

独自空忆成欢 submitted on 2019-12-05 03:32:59
First of all, I'm new to SpecFlow. I have a feature file which I have / want to automate using MSTest to run as a functional test involving a fully set up server, data access ... For this purpose I have to configure the server with the data in the SpecFlow 'Given' blocks and start it afterwards. I also have to copy some files to the test's output directory. In the non-SpecFlow functional tests I was using the ClassInitialize attribute to get the TestDeploymentDir from the TestContext; something like this:

    [ClassInitialize]
    public static void ClassSetup(TestContext context) {
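The excerpt breaks off at the ClassInitialize signature. A minimal sketch of the SpecFlow-side equivalent, assuming a [BeforeTestRun] hook is an acceptable place for the one-time setup; the file name and the use of the test assembly's base directory (instead of TestContext.TestDeploymentDir, which SpecFlow hooks do not receive) are assumptions, not from the question:

```csharp
using System;
using System.IO;
using TechTalk.SpecFlow;

[Binding]
public class DeploymentHooks
{
    // Runs once per test run, roughly where the ClassInitialize logic used to live.
    // SpecFlow hooks don't get an MSTest TestContext, so the deployment directory
    // is approximated with the test assembly's base directory.
    [BeforeTestRun]
    public static void CopyFilesToOutputDirectory()
    {
        var outputDir = AppDomain.CurrentDomain.BaseDirectory;

        // "server-data.xml" is a placeholder for whatever files the server needs.
        File.Copy("server-data.xml",
                  Path.Combine(outputDir, "server-data.xml"),
                  overwrite: true);
    }
}
```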

Why should we use Coded UI when we have SpecFlow?

陌路散爱 submitted on 2019-12-05 03:30:01
We have utilized SpecFlow and WatiN for acceptance tests on my current project. The customer wants us to use Microsoft Coded UI instead. I have never tested Coded UI, but from what I've seen so far it looks cumbersome. I want to specify my acceptance tests up front, before I have a UI, not as a result of some record/playback stuff. Anyway, can someone please tell me why we should throw away the SpecFlow/WatiN combo and replace it with Coded UI? I've also read that you can combine SpecFlow with Coded UI, but it looks like a lot of overhead for something which I am already doing fine in SpecFlow

Clean Up after Canceling tests

寵の児 submitted on 2019-12-05 03:19:40
I'm currently running tests through Visual Studio. Before all the tests are run, I automatically create a set number of users with known credentials, and at the end of the run I delete those users. However, sometimes I need to cancel my tests midway. In these cases the tests never get the chance to clean up, which means leftover fake user info from the test run remains and may cause the next test run to crash (when it attempts to add user info into the DB). Is there any way to force Visual Studio/MSTest to run a clean up method even if the test is canceled? I know one option is to have
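The excerpt stops at the poster's own idea. One common workaround (a sketch under assumptions, not the thread's accepted answer) is to make the setup self-cleaning: purge any leftover users at the start of the run, so a canceled run cannot poison the next one. UserStore and the "testuser-" prefix are hypothetical stand-ins for the question's data-access code:

```csharp
using TechTalk.SpecFlow;

// Hypothetical stand-in for whatever data-access helper the tests use.
public static class UserStore
{
    public static void DeleteUsersWithPrefix(string prefix) { /* DELETE FROM Users WHERE ... */ }
    public static void CreateUsers(string prefix, int count) { /* INSERT ... */ }
}

[Binding]
public class TestUserHooks
{
    [BeforeTestRun]
    public static void CreateTestUsers()
    {
        // Purge anything a previously canceled run left behind *before*
        // creating fresh users, instead of relying only on AfterTestRun.
        UserStore.DeleteUsersWithPrefix("testuser-");
        UserStore.CreateUsers("testuser-", count: 5);
    }

    [AfterTestRun]
    public static void DeleteTestUsers()
    {
        UserStore.DeleteUsersWithPrefix("testuser-");
    }
}
```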

Error with SpecFlow in Visual Studio 2012 with <unitTestProvider>

谁说我不能喝 submitted on 2019-12-05 02:49:51
I am using Selenium, SpecFlow and NUnit to run automated tests with Visual Studio. The code was working with Visual Studio 2010. After I installed Visual Studio 2012, Selenium and SpecFlow again (I think I did it right), it stopped working. First, the steps didn't recognize their definitions, and I think I fixed that by writing [Binding] in all the C# files that contain their definitions. Now, when I try to build the project, it shows an error for every feature file that I can't solve. It shows 50 errors (the same number of SpecFlow feature files I have), they are all the same, and they say this: Error:

Set NUnit TimeoutAttribute from SpecFlow

谁说我不能喝 submitted on 2019-12-05 01:30:18
Question: I've written several long-running end-to-end integration tests using SpecFlow, but they are failing due to NUnit timeouts. Adding the [Timeout(x)] attribute to the TestFixture solves the issue, but of course it gets overwritten every time the feature is updated. How can I remove or extend the timeout in a way that SpecFlow will respect?

Answer 1: I am only getting to understand SpecFlow, but could you implement a custom tag that could do this? Maybe you could place these at the BeforeScenario or

Run SpecFlow tests without Visual Studio

岁酱吖の submitted on 2019-12-05 01:17:01
I would like our QA team to be able to run SpecFlow tests. I would like them to be able to change values and append more scenarios. These appended scenarios will have matching step definitions, so they only need to modify the features. The QA team does not have access to Visual Studio. Is it possible to achieve this without using Visual Studio? We are currently using MSTest but we are willing to use NUnit if that will help.

Yes - there is a 'simple' way. Since SpecFlow merely generates tests from the text in the .feature files, you can use the command line runner of the tool of your choice.

Passing arrays of variables in SpecFlow

耗尽温柔 submitted on 2019-12-05 01:04:13
Is there a way to pass an array of parameters instead of passing each parameter individually? For example, I have the following scenario:

    When i login to a site then <firstname>, <lastname>, <middleName>, <Desingation>, <Street>, <Apartmentno> are valid

The list above can go on. Instead, can I pass all the above variables in an array?

You can pass a comma separated string and then transform it into a list:

    When i login to a site then 'Joe,Bloggs,Peter,Mr,Some street,15' are valid

    [Then("'(.*)' are valid")]
    public void ValuesAreValid(List<String> values)
    {
    }

    [StepArgumentTransformation]
    public
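The excerpt cuts off inside the [StepArgumentTransformation]. A minimal sketch of how that transformation could be completed so the captured string is converted into the List the step expects; the class name and the trimming are my additions:

```csharp
using System.Collections.Generic;
using System.Linq;
using TechTalk.SpecFlow;

[Binding]
public class Transforms
{
    // Converts the comma-separated string captured by "'(.*)' are valid"
    // into the List<string> parameter of the step definition.
    [StepArgumentTransformation]
    public List<string> CommaSeparatedStringToList(string csv)
    {
        return csv.Split(',').Select(v => v.Trim()).ToList();
    }
}
```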

Is it possible to programmatically add lines to a scenario?

江枫思渺然 submitted on 2019-12-05 00:46:52
Question: I would like to add the same line to the start of each one of my SpecFlow tests. This line specifies a list of several scenarios which will change over time, and therefore it is not feasible to maintain this list for every test. For example:

    Given I have set my site theme to <MyTheme>
    | Theme Names |
    | Theme 1     |
    | Theme 2     |
    | Theme 3     |
    | Theme 4     |
    | Theme 5     |

I'd like to have this test repeat for each of the themes. The list of themes is not set in stone, and should be maintained in a single place.
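Gherkin itself has no mechanism for sharing one Examples table across features, so one workaround (a sketch under assumptions, not an answer from the original thread) is to keep the theme list in a single place in code and expose it through a step binding; ThemeCatalog and the step wording are illustrative names, not from the question:

```csharp
using TechTalk.SpecFlow;

// Single place where the theme list is maintained; feature files no longer
// need to repeat it.
public static class ThemeCatalog
{
    public static readonly string[] All =
        { "Theme 1", "Theme 2", "Theme 3", "Theme 4", "Theme 5" };
}

[Binding]
public class ThemeSteps
{
    private readonly ScenarioContext _scenarioContext;

    public ThemeSteps(ScenarioContext scenarioContext)
    {
        _scenarioContext = scenarioContext;
    }

    [Given(@"I have set my site theme to each available theme")]
    public void GivenIHaveSetMySiteThemeToEachAvailableTheme()
    {
        // Later steps can read this list and repeat their checks per theme,
        // so updating ThemeCatalog.All updates every scenario at once.
        _scenarioContext["themes"] = ThemeCatalog.All;
    }
}
```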

Using SpecFlow scenarios for both integration tests and unit tests

别等时光非礼了梦想. submitted on 2019-12-05 00:39:10
Question: I've just come across BDD and SpecFlow and it looks very interesting. When writing user stories they are typically high-level and the actor uses the GUI. So when writing scenarios they will typically be GUI tests or integration tests covering a high level of the system. But what about unit tests further down in the solution? E.g. service endpoints, business objects, etc. Should I write new scenarios for those, or is there a way to reuse the same scenarios for low-level testing (unit tests) or
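The excerpt ends mid-question, but one common way to reuse the same scenarios at different levels is the "driver" approach: keep the step definitions identical and swap the layer they talk to. The sketch below uses illustrative names and SpecFlow's bundled BoDi container; it is an assumption about how this could look, not the thread's answer:

```csharp
using System;
using BoDi;
using TechTalk.SpecFlow;

// The steps only know this abstraction; one implementation drives the GUI,
// another calls the service/business layer directly.
public interface IOrderDriver
{
    void PlaceOrder(string product);
    bool OrderWasAccepted();
}

// Fast, unit-test-style variant that skips the GUI. (Illustrative only.)
public class InProcessOrderDriver : IOrderDriver
{
    private bool _accepted;
    public void PlaceOrder(string product) => _accepted = !string.IsNullOrEmpty(product);
    public bool OrderWasAccepted() => _accepted;
}

[Binding]
public class DriverSetup
{
    private readonly IObjectContainer _container;
    public DriverSetup(IObjectContainer container) => _container = container;

    [BeforeScenario]
    public void RegisterDriver()
    {
        // Register a GUI-based implementation here instead to run the very
        // same scenarios as end-to-end tests.
        _container.RegisterTypeAs<InProcessOrderDriver, IOrderDriver>();
    }
}

[Binding]
public class OrderSteps
{
    private readonly IOrderDriver _driver;
    public OrderSteps(IOrderDriver driver) => _driver = driver;

    [When(@"I order ""(.*)""")]
    public void WhenIOrder(string product) => _driver.PlaceOrder(product);

    [Then(@"the order is accepted")]
    public void ThenTheOrderIsAccepted()
    {
        if (!_driver.OrderWasAccepted())
            throw new Exception("Order was not accepted");
    }
}
```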