integration-testing

How can we decide which testing method to use?

倖福魔咒の submitted on 2019-11-28 11:21:18
Question: I have a project in .NET and I want to test it, but I don't know anything about testing or its methods. How can I go ahead with testing? Which method is better for a beginner? Is there any way to decide which testing method should be taken into account for the best result? Answer 1: There is no "right" or "wrong" in testing. Testing is an art, and what you should choose and how well it works out for you depends a lot on the project and your experience. But as a professional Tester Expert my…

How does the In-Memory HttpServer know which WebAPI project to host?

荒凉一梦 submitted on 2019-11-28 10:58:12
I want to run tests against a Web API project using the popular in-memory hosting strategy. My tests reside in a separate project. Here's the start of my test: [TestMethod] public void TestMethod1() { HttpConfiguration config = new HttpConfiguration(); config.Routes.MapHttpRoute( name: "DefaultApi", routeTemplate: "api/{controller}/{id}", defaults: new { id = RouteParameter.Optional }); HttpServer server = new HttpServer(config); HttpMessageInvoker client = new HttpMessageInvoker(server); } The client is initialized with the HttpServer, establishing the direct client-server connection. Other than…
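A minimal sketch of how this resolves: HttpServer does not host "a project" at all. It dispatches requests through whatever routes its HttpConfiguration defines, and controllers are discovered from the assemblies the test project references, so referencing the Web API project is what makes its controllers reachable. The controller name below is an assumption for illustration.

```csharp
// Sketch (ASP.NET Web API 2, System.Web.Http). "ValuesController" is
// assumed to live in the referenced Web API project; the in-memory
// pipeline finds it via the route, not via any project/host binding.
[TestMethod]
public async Task Get_ReturnsOk_ViaInMemoryPipeline()
{
    var config = new HttpConfiguration();
    config.Routes.MapHttpRoute(
        name: "DefaultApi",
        routeTemplate: "api/{controller}/{id}",
        defaults: new { id = RouteParameter.Optional });

    using (var server = new HttpServer(config))
    using (var client = new HttpMessageInvoker(server))
    {
        var request = new HttpRequestMessage(
            HttpMethod.Get, "http://localhost/api/values");
        var response = await client.SendAsync(request, CancellationToken.None);
        Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
    }
}
```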

AppSettings.json for Integration Test in ASP.NET Core

旧街凉风 submitted on 2019-11-28 10:48:57
I am following this guide. I have a Startup in the API project that uses an appsettings.json configuration file. public class Startup { public Startup(IHostingEnvironment env) { var builder = new ConfigurationBuilder() .SetBasePath(env.ContentRootPath) .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true) .AddEnvironmentVariables(); Configuration = builder.Build(); Log.Logger = new LoggerConfiguration() .Enrich.FromLogContext() .ReadFrom.Configuration(Configuration) .CreateLogger(); } The particular part I'm looking at is env.ContentRootPath. I did some digging around…
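One common approach here, sketched under the assumption that the test project copies the settings file to its build output (Copy to Output Directory = "Copy if newer"): build the configuration from the test's own working directory instead of relying on env.ContentRootPath, which in a test host typically points at the test project's bin folder anyway.

```csharp
// Sketch: read appsettings.json from the test output directory.
// Assumes the file is marked to copy to output in the test .csproj.
var config = new ConfigurationBuilder()
    .SetBasePath(Directory.GetCurrentDirectory())   // the test bin folder
    .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
    .AddEnvironmentVariables()
    .Build();
```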

Compile tests with SBT and package them to be run later

≯℡__Kan透↙ submitted on 2019-11-28 10:05:49
I'm working with SBT and the Play! Framework. Currently we have a commit stage in our pipeline where we publish our binaries to Artifactory. The binaries are generated with the dist task. The pipeline then runs smoke and acceptance tests that are written in Scala and run with sbt. What I want to do is compile the smoke and acceptance tests as well as the binary and publish them to Artifactory. That will allow the pipeline to download these binaries (the test suites) and run them, instead of recompiling them every time, which takes a long time. I tried sbt test:compile, which generates the…
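A minimal sketch of the usual sbt answer: compiled tests can be packaged into their own jar and published alongside the main artifacts, so the pipeline can fetch and run them later.

```scala
// build.sbt sketch (sbt 0.13-era syntax): also publish the test jar.
publishArtifact in (Test, packageBin) := true
```

With that setting, `sbt test:package` produces the tests jar and a normal `publish` uploads it together with the main binaries.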

How to run a single specific test case when using protractor

若如初见. submitted on 2019-11-28 09:38:36
I am using Protractor for AngularJS testing in my app and have around 19 test cases at the moment, one of which is failing: describe('Login page', function() { beforeEach(function() { browser.ignoreSynchronization = true; ptor = protractor.getInstance(); }); it('should contain navigation items', function(){ //test case code here }); it('should login the user successfully', function(){ //test case code here }); }); Currently I run all the test cases. But how can I run just one test case, for example to debug an issue in the one described as "Login page should login the user…
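A sketch of the usual Jasmine answer (Protractor uses Jasmine by default): focus a single spec with fit, or a single suite with fdescribe; when any focused spec exists, only focused specs run.

```javascript
// Sketch: fit focuses one spec; every plain it() is skipped while a
// focused spec exists. Remove the "f" prefix to restore the full run.
describe('Login page', function () {
  it('should contain navigation items', function () {
    // skipped while a focused spec exists
  });

  fit('should login the user successfully', function () {
    // only this spec runs
  });
});
```

Alternatively, restrict the run to one spec file from the command line, e.g. `protractor conf.js --specs login-spec.js`.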

Cucumber: Scenario Outline reusing examples table

雨燕双飞 submitted on 2019-11-28 08:52:29
Question: I have a few tests like below:

Scenario Outline: Add two numbers
  Given two numbers <number_1> and <number_2>
  When I add them
  Then Result is <number_3>

  Examples:
    | number_1 | number_2 | number_3 |
    | 2        | 3        | 5        |
    | 1        | 2        | 3        |

Scenario Outline: Update two numbers
  Given two numbers <number_1> and <number_2>
  When I update them
  Then Result is <number_3>

  Examples:
    | number_1 | number_2 | number_3 |
    | 2        | 3        | 5        |
    | 1        | 2        | 3        |

For each test I have to add the same Examples table. Is there any way to extract this table to use the…
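For context, plain Gherkin has no mechanism for sharing an Examples table between Scenario Outlines; each outline owns its tables. One workaround, sketched below, is to fold the operations that share the data into a single outline so the table appears only once.

```gherkin
# Sketch of a workaround: one outline covering both operations,
# so the shared Examples table is written a single time.
Scenario Outline: Add and update two numbers
  Given two numbers <number_1> and <number_2>
  When I add them
  Then Result is <number_3>
  When I update them
  Then Result is <number_3>

  Examples:
    | number_1 | number_2 | number_3 |
    | 2        | 3        | 5        |
    | 1        | 2        | 3        |
```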

Rails integration test with selenium as webdriver - can't sign_in

不想你离开。 submitted on 2019-11-28 06:50:05
Hi, I have a very simple integration test: require 'integration_test_helper' Capybara.current_driver = :rack_test class AdminSignsInTest < ActionDispatch::IntegrationTest test 'can sign in' do email = 'bob@example.com' password = 'secret_password' Admin.create email: email, password: password visit new_admin_session_path fill_in 'admin_email', with: email fill_in 'admin_password', with: password click_button I18n.t('devise.views.sign_in') assert_equal I18n.t('devise.sessions.signed_in'), find('p.notice').text end end When I set the Capybara driver to rack_test the test passes, but when I set it to…
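A sketch of the usual diagnosis: with the selenium driver the app server runs in a separate thread, so records created inside the test's database transaction (like the Admin above) are invisible to it. A common fix is to disable transactional fixtures and clean the database by truncation instead, assuming the database_cleaner gem is available.

```ruby
# Sketch (assumes the database_cleaner gem): make test data visible
# to the separately-threaded app server by committing it for real,
# then truncating between tests.
class ActionDispatch::IntegrationTest
  self.use_transactional_fixtures = false

  setup    { DatabaseCleaner.strategy = :truncation }
  teardown { DatabaseCleaner.clean }
end
```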

Xcode project how to detect target programmatically or how to use env vars

空扰寡人 submitted on 2019-11-28 06:18:38
I want to do an application test that parses some JSON, stores it to Core Data, and reads out some objects. How can my code know if it's being run as part of a test or a normal run? Just some way to know "are we in the test target"? Because when the app fires up it now kicks off a bunch of requests to populate my Core Data store with info from the server, and I don't want it to do this during my tests. I want to fire up the app, read hard-coded JSON from a file, store it using the same methods as otherwise into Core Data, and verify the results. If someone could explain how to pass specific key-value pairs on…
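One common approach, sketched below: XCTest injects XCTestConfigurationFilePath into the process environment, so the app can detect a test run without any scheme changes. For the env-var route, variables added under the scheme's Test action are read the same way.

```swift
// Sketch: detect an XCTest run via the environment XCTest sets up.
// A custom variable defined in the scheme's Test action would be
// read identically, e.g. environment["USE_FIXTURE_JSON"].
var isRunningTests: Bool {
    ProcessInfo.processInfo.environment["XCTestConfigurationFilePath"] != nil
}
```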

User Interface Testing

江枫思渺然 submitted on 2019-11-28 05:03:05
We are working on a large project with a measure of new and modified GUI functionality. We've found in the past that we often introduced new problems in related code when adding new functionality. We have non-technical users perform testing, but they often miss parts and allow bugs to slip through. My question: are there any best practices for organizing the UI testing of a WinForms project? Is there any way to automate it? Thanks! There are GUI testing tools that will click buttons and so on for you, but they're pretty fragile in my experience. The best thing to do is to keep your UI layer as thin…
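A sketch of the "thin UI layer" idea, using the Model-View-Presenter pattern: the form only forwards events to a presenter that talks to an interface, so the logic is unit-testable without driving real WinForms controls. All names below are illustrative.

```csharp
// Sketch (illustrative names): logic lives in the presenter; the form
// merely implements ILoginView, so tests can substitute a stub view.
public interface ILoginView
{
    string UserName { get; }
    void ShowError(string message);
}

public class LoginPresenter
{
    private readonly ILoginView view;

    public LoginPresenter(ILoginView view) { this.view = view; }

    public bool Submit()
    {
        if (string.IsNullOrEmpty(view.UserName))
        {
            view.ShowError("User name is required.");
            return false;
        }
        return true;
    }
}
```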

JUnit test report enrichment with JavaDoc

南楼画角 submitted on 2019-11-28 03:57:01
For a customer we need to generate detailed test reports for integration tests which not only show that everything is green, but also what each test did. My colleagues and I are lazy guys and we do not want to hack together spreadsheets or text documents. So I am thinking about a way to document the more complex integration tests with JavaDoc comments on each @Test-annotated method and each test class. For the test guys it is a great help to see which requirement, Jira ticket, or whatever else the test is linked to, and what the test actually tries to do. We want to provide this information to our customer…
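One caveat worth sketching: JavaDoc is stripped at compile time, so a report generator running against compiled tests cannot see it. A runtime-retained annotation is a common alternative for attaching requirement links and descriptions to test methods. The annotation name TestDoc below is an assumption, not a JUnit API.

```java
// Sketch: a custom runtime-retained annotation (TestDoc is a made-up
// name) carries the documentation that JavaDoc would lose, and a
// report generator can read it back via reflection.
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

public class TestDocDemo {
    @Retention(RetentionPolicy.RUNTIME)
    @interface TestDoc {
        String requirement();
        String description();
    }

    @TestDoc(requirement = "JIRA-123",
             description = "Verifies that adding an item updates the cart total.")
    public void addItemUpdatesTotal() { /* test body would go here */ }

    public static void main(String[] args) {
        // Collect doc metadata the way a report generator would.
        for (Method m : TestDocDemo.class.getDeclaredMethods()) {
            TestDoc doc = m.getAnnotation(TestDoc.class);
            if (doc != null) {
                System.out.println(m.getName() + " [" + doc.requirement()
                        + "] " + doc.description());
            }
        }
    }
}
```

A report tool can then group these entries by requirement key when rendering the customer-facing document.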