Modern unit testing practices act as a conduit for improved software designs that are more amenable to change and can easily be backed by automation for fast feedback on quality assurance. The necessity of reducing external dependencies forces us to design our modules with minimal coupling, which can then be leveraged at the module, component and subsystem levels in our testing. As we start to integrate our units into larger blocks, and to interface our resulting components with external systems, we find ourselves switching nomenclature as we progress from unit to integration testing. But is a change in mindset and tooling really required?
The xUnit testing framework is commonly perceived as an aid to Unit Testing, but the constraints it imposes on the architecture mean it is an excellent mechanism for invoking arbitrary code in a restricted context. Tests can be partitioned by categorisation at the test and fixture level, and through physical packaging, leading to a flexible test code structure. Throw in its huge popularity and you have a simplified learning curve for expressing more than just unit tests.
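That claim, that an xUnit framework is essentially a mechanism for invoking arbitrary code in a restricted context, can be sketched in a few lines. The sketch below is illustrative Python, not a real framework; MiniRunner and the set_up/tear_down hooks are invented names standing in for the reflection-driven discovery and per-test isolation that real xUnit runners provide.

```python
class MiniRunner:
    """Toy xUnit-style runner: discovers test methods by naming
    convention and invokes each one in a fresh fixture instance."""

    def run(self, fixture_class):
        results = {}
        for name in dir(fixture_class):
            if not name.startswith("test_"):
                continue
            fixture = fixture_class()  # fresh, restricted context per test
            try:
                if hasattr(fixture, "set_up"):
                    fixture.set_up()
                getattr(fixture, name)()
                results[name] = "pass"
            except AssertionError:
                results[name] = "fail"
            finally:
                if hasattr(fixture, "tear_down"):
                    fixture.tear_down()
        return results


class CalculatorTests:
    def set_up(self):
        self.value = 40

    def test_addition(self):
        assert self.value + 2 == 42

    def test_failing(self):
        assert self.value == 0


results = MiniRunner().run(CalculatorTests)
```

Because the runner only cares about a naming convention and a per-test lifecycle, the code it invokes need not be a "unit" test at all, which is what lets the same shape carry component and integration tests too.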
Using scenarios from his current system, Chris aims to show how you can use a similar format and tooling for unit, component and integration level tests, albeit with a few liberties taken to work around the inherent differences between the methodologies.
8. Stream of Consciousness
• Developer Driven Testing
• The Essence of (x)Unit Testing
• Those Pesky Dependencies
• Code & Test Evolution in Practice
9. Test == Specification
public void Execute_Should_Elide_Agreement_When_No_Trades_Match()
{
    var trades = new List<Trade> { new Trade("trade-id", "product-a") };
    var agreement = new Agreement("product-b");
    var task = new PreparationTask(trades, agreement);
    var result = task.Execute(s_services);
    Assert.That(task.Agreement, Is.Null);
}
10. Consistent Style
public void a_c_sharp_test()
{
    var arrangement = new Arrangement();
    var result = arrangement.action();
    Assert.That(result, Is.EqualTo(expectation));
}
create procedure a_sql_test
as
    declare @arrangement varchar(100),
            @result varchar(100)
    exec action @input = @arrangement,
                @output = @result output
    exec AssertAreEqual @result, 'expectation'
go
12. Promotes Arbitrary Code Execution
public void Prepare_Should_Elide_Agreement_When_No_Trades_Match()
{
    var trades = new List<Trade> { new Trade("trade-id", "product-a") };
    var agreement = new Agreement("product-b");
    var task = new PreparationTask(trades, agreement);
    var result = task.Execute(s_services);
    Assert.That(task.Agreement, Is.Null);
}
[Diagram: the EXE reduced to a bootstrapping Stub around the Library; the Library Tests are driven interchangeably by a Test Runner, a Debugger, or a Custom Test Harness]
13. Automated Testing
• Lowers the barrier to running tests
• Regression testing is implicit
• Build server watches your back
14. Stream of Consciousness
• Developer Driven Testing
• The Essence of (x)Unit Testing
• Those Pesky Dependencies
• Code & Test Evolution in Practice
19. Database
• Per-user / per-branch workspace
• Only need schema not data (Integration)
• Can reuse existing unit test database
• Use same code revision for compatibility
• Use transactions to avoid residual effects
• Fake tables with CSV files
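Two of the bullets above, rolling back transactions to avoid residual effects and faking tables from CSV files, can be sketched as follows. This is a Python/sqlite3 sketch for brevity (an in-memory database stands in for the real per-user RDBMS workspace), and the Trade table and its columns are invented for illustration.

```python
import csv
import io
import sqlite3
import unittest


class TransactionalDbTest(unittest.TestCase):
    # Canned CSV data, as if loaded from a file checked in alongside
    # the tests (columns are illustrative).
    CSV_DATA = "TradeId,Product\n1,product-a\n2,product-b\n"

    def setUp(self):
        self.connection = sqlite3.connect(":memory:")
        self.connection.execute(
            "create table Trade (TradeId int, Product text)")
        # Populate the fake table from the CSV data; the inserts sit
        # in an uncommitted transaction.
        for row in csv.DictReader(io.StringIO(self.CSV_DATA)):
            self.connection.execute(
                "insert into Trade values (?, ?)",
                (int(row["TradeId"]), row["Product"]))

    def tearDown(self):
        # Roll back everything the test (and setUp) wrote, so no
        # residual state survives, even after a failure.
        self.connection.rollback()
        self.connection.close()

    def test_fake_table_is_visible(self):
        count = self.connection.execute(
            "select count(*) from Trade").fetchone()[0]
        self.assertEqual(count, 2)
```

Controlling the connection (and hence the transaction lifetime) in the fixture is what makes the rollback reliable; with a pooled connection you would need to pin the same connection for the duration of the test.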
20. Database Asserts
public void AddCustomer_Should_Persist_The_Customer()
{
    const int id = 1234;
    const string name = "name";
    var customer = new Customer(. . .);
    using (var connection = AcquireConnection())
    {
        CustomerDataMapper.AddCustomer(customer, connection);
        Assert.That(RowExists("dbo.Customer",
                              "CustomerId = {0}"
                            + " AND CustomerName = '{1}'",
                              id, name),
                    Is.True);
    }
}
21. Database SetUp/TearDown
[TestFixture, TestCategory.DatabaseTest]
public class SomeEntityTests : DatabaseTestBase
{
    [TestFixtureSetUp]
    public void FixtureSetUp()
    {
        using (var connection = AcquireConnection())
        {
            connection.Execute("insert into thingy_table values(1, 2, 3)");
            connection.Execute("exec test.InsertThingy 1, 2, 3");
        }
    }

    [TestFixtureTearDown]
    public void FixtureTearDown()
    {
        using (var connection = AcquireConnection())
        {
            connection.Execute("delete from thingy_table");
            connection.Execute("exec test.DeleteAllThingys");
        }
    }
}
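The same fixture-level SetUp/TearDown shape exists in Python's unittest as setUpClass/tearDownClass, which is handy when static reference data (currencies, say) must be inserted once to satisfy foreign key constraints and removed again afterwards. A sketch, with sqlite3 standing in for the real database and invented table names:

```python
import sqlite3
import unittest


class CurrencyDependentTests(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # One connection and one set of static rows for the whole fixture.
        cls.connection = sqlite3.connect(":memory:")
        cls.connection.execute("pragma foreign_keys = on")
        cls.connection.execute(
            "create table Currency (Code text primary key)")
        cls.connection.execute(
            "create table Price (Amount real, "
            "Code text references Currency(Code))")
        cls.connection.execute("insert into Currency values ('GBP')")

    @classmethod
    def tearDownClass(cls):
        # Remove child rows before the static parent rows, or the
        # foreign key constraint fires during cleanup.
        cls.connection.execute("delete from Price")
        cls.connection.execute("delete from Currency")
        cls.connection.close()

    def test_insert_satisfies_foreign_key(self):
        self.connection.execute("insert into Price values (42.0, 'GBP')")
        row = self.connection.execute(
            "select Amount, Code from Price").fetchone()
        self.assertEqual(row, (42.0, "GBP"))
```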
22. Helper Base Class
public class DatabaseTestBase
{
    public ISqlConnection AcquireConnection()
    {
        return . . .
    }
    . . .
    public bool RowExists(string table, string where, params object[] args)
    {
        string filter = String.Format(where, args);
        string sql = String.Format(
            "select count(*) as [Count] from {0} where {1}",
            table, filter);
        using (var connection = AcquireConnection())
        {
            var reader = connection.ExecuteQuery(sql);
            return (reader.GetInt("Count") == 1);
        }
    }
    . . .
}
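The same helper base class can be sketched in Python, with sqlite3 standing in for the real connection factory. One deliberate difference from the C# above: the filter values go through parameterised query placeholders rather than string formatting, which keeps the helper safe for arbitrary test inputs. Table and column names are illustrative.

```python
import sqlite3


class DatabaseTestBase:
    def acquire_connection(self):
        # In the real base class this would hand out a connection to
        # the per-user / per-branch test database.
        return sqlite3.connect(":memory:")

    def row_exists(self, connection, table, where, *args):
        # The table name and filter shape come from the test; the
        # filter *values* are bound as parameters.
        sql = "select count(*) from %s where %s" % (table, where)
        count = connection.execute(sql, args).fetchone()[0]
        return count == 1


# Usage sketch mirroring the AddCustomer assert on the previous slide.
base = DatabaseTestBase()
conn = base.acquire_connection()
conn.execute("create table Customer (CustomerId int, CustomerName text)")
conn.execute("insert into Customer values (1234, 'name')")
exists = base.row_exists(conn, "Customer",
                         "CustomerId = ? and CustomerName = ?",
                         1234, "name")
```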
26. Initial System Test
[Diagram: the Test Runner drives the System Tests against the Calculator, which is fed by the Market Data Service, the Trade Data Service and the Analytics Service]
[Test, TestCategory.SystemTest]
public void Calculate_Answer()
{
    . . .
    var result = c.calculate();
    Assert.That(result, Is.EqualTo(42));
}
27. Addressing External Risks
[Diagram: the Test Runner exercises the External Market Data Service Tests and the External Trade Data Service Tests directly against the External Market Data Service API and the External Trade Data Service API]
28. Internal Service Design
[Diagram: the Internal Service wraps the External Service API behind an External Service Facade; the External Service Tests verify the real API, the Internal Service Tests run against Mock External Services, and a Mock Service backs the Performance Test Runner]
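The facade-as-mocking-point idea on this slide can be sketched briefly: the external API exposes only concrete types, so a thin internal facade gives tests something to substitute. All class and method names below are invented for illustration; the mock comes from Python's standard unittest.mock.

```python
import unittest
from unittest import mock


class MarketDataFacade:
    """Thin wrapper over the concrete external API; the only seam
    the rest of the system depends on."""

    def spot_price(self, instrument):
        raise NotImplementedError("calls the real external service")


class Calculator:
    def __init__(self, market_data):
        self._market_data = market_data

    def value(self, instrument, quantity):
        return self._market_data.spot_price(instrument) * quantity


class CalculatorTests(unittest.TestCase):
    def test_value_uses_spot_price(self):
        # Autospec'd mock: same interface as the facade, no network.
        market_data = mock.create_autospec(MarketDataFacade, instance=True)
        market_data.spot_price.return_value = 10.5
        calculator = Calculator(market_data)
        self.assertEqual(calculator.value("ABC", 4), 42.0)
        market_data.spot_price.assert_called_once_with("ABC")
```

The facade also gives the separate external-API tests a single, well-defined contract to verify against the real service.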
29. Data Access Layer
[Diagram: the Data Access Layer sits between the Database Public Interface and the rest of the system; the Database Unit Tests cover the database itself, the Data Access Layer Tests run against either the Database API or a Mock Database API, and a Mock Data Access Layer serves the layers above]
30. System Evolution
[Diagram: the evolved system, with the Calculator fed by the Market Data Service, the Trade Data Service, the Analytics Service and the Data Access Layer; the Test Runner drives the Unit / Integration / System Tests, mixing in the Mock Market Data Service, Mock Trade Data Service, Mock Analytics Service and Mock Data Access Layer alongside the Database Public Interface, the External Market Data Service API and the External Analytics Service]
[Test, TestCategory.SystemTest]
public void Calc_Answer_For_ABC_Plc()
{
    . . .
    var result = c.calculate();
    Assert.That(result, Is.EqualTo(41.75));
}
-It’s all about testing units as they get bigger and bigger
-Questions at the end
-Simple black box testing with test cases is unrealistic (for me)
-Simple scalar inputs - No I/O or threads
-What about exceptions?
-Users are at the periphery, so User Acceptance Tests are only a small part
-The vast majority of tests are internal and of a technical nature
-Crosses subsystems and internal/external systems – lots of I/O, threads etc.
-Different interpretations of what a Unit is
-This talk is developer focused, i.e. biased towards white box testing
-Layers of testing just like layers of code
-More layers means more dependencies & slower feedback
-New tests become regression tests if kept
-Naming is most important aspect and gets harder as you acquire layers
-Example test for a workaround (i.e. not in specification)
-Most tests are non-trivial and require non-trivial inputs
-xUnit brings familiarity across the technology stack
-One size doesn’t fit all but it can fit many cases
-Our old friend Mr Coupling
-Programming to an interface (quite literally in modern languages, but not in C/C++)
-All testable logic in libraries, executables just bootstrapping stubs
-Faster debug time, as it is often easier to reproduce the problem when less setup is required
-Debug service as in-proc process
-Script + command line vs Test configuration (cf Resharper)
-Inside vs outside your control (dealing with failed builds)
-Memory is transient – automatic cleanup
-Shadow cache can help and hinder
-Need to use SetUp & TearDown to ensure no persistent effects after failure
-Sometimes tests are inconclusive (external service down)
-FS often considered a dependency acceptable in unit testing
-Should you mock the file-system? Yes.
-VCS folder is part of source code (ie isolated), relative paths tend to be stable
-Build server is fixed common point but needs manual versioning
-Resource files (.zip within .rc, unpack to TEMP etc)
-ASSERTS? Test for presence (simple) or compare contents (harder)
-TEMP known directory, but for multiple builds need to isolate branches (e.g. use PID)
-Output needs configuring but is isolated & also transient
-Desktop vs RDBMS (Lite editions)
-Not testing procedures (integration) - testing interaction with procedures
-Assume underlying DB is already unit tested
-Build on top of unit test database
-Use transactions to help avoid residual effects - control the connection pool and therefore the transaction lifetime
-For entities - row exists, n rows exist
-Do you check all attributes or just entity ID?
-Beware of caching inside frameworks like Hibernate if using reader to test writing by round-tripping entities
-As 'dbo' you should have complete access and can circumvent permissions to access the underlying tables when required
-Fixture SetUp/TearDown for static data (e.g. currencies) to satisfy foreign key constraints
-Write separate helper functions (in SQL?) to add remove common data (using other schema?)
-Use existing database API where possible to make tests less brittle
-In T-SQL use RaiseError to throw an exception
-Verifying the external API contract to detect changes (write tests instead of prototyping)
-Helps to test your façade
-Highly unreliable (failed automated builds causes noise)
-Your UAT output can be someone else’s DEV input
-Writing? (publish to bus/share)
-Current system evolution from .exe stub to many components
-Explain very briefly the overall architecture
-Highlight where we want to provide tests
-Key area of Risk -> the external services
-Start with in-process services to enable end-to-end testing from the start
-Services initially just stubs with interfaces to allow mocking
-External service façade provides a mocking point, especially when external API is all concrete types
-External service API tested
-Internal service can be component tested with mock service
-Internal service can be independently integration tested
-Mocks became production components
-Hubert’s colour schemes -> every blue box should have a purple or green box feeding it
-Stubs replaced with mocks to facilitate system evolution and integration testing
-Once real data can be provided through the services a realistic system test can be coded
-File-system used to provide fixed set of test data for one counterparty
-Mix & match different mocks and fakes for different tests
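The closing note, mixing and matching different mocks and fakes for different tests, can be sketched as a wiring that assembles the system under test from whichever collaborators a given test level needs. All names below are invented for illustration; the fake trade service plays the role of the file-system-backed fixed data set mentioned above.

```python
class FixedTradeService:
    """Fake: serves a canned trade set, as if loaded from a
    file-system fixture for one counterparty."""

    def trades_for(self, counterparty):
        return [("trade-1", 100.0), ("trade-2", 25.0)]


class StubAnalyticsService:
    """Stub: trivial pricing, good enough below the system-test level."""

    def price(self, notional):
        return notional * 0.5


class Calculator:
    """The real component under test; it only sees interfaces, so the
    same wiring works with mocks, fakes, or production services."""

    def __init__(self, trade_service, analytics_service):
        self._trades = trade_service
        self._analytics = analytics_service

    def calculate(self, counterparty):
        trades = self._trades.trades_for(counterparty)
        return sum(self._analytics.price(notional)
                   for _, notional in trades)


# Component-level wiring: real Calculator, fake trades, stub analytics.
calculator = Calculator(FixedTradeService(), StubAnalyticsService())
result = calculator.calculate("ABC Plc")
```

A system test would swap in the real services behind the same two constructor parameters, which is what lets the mocks later graduate into production components.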