Catalogue Browser


Lazily Evaluate Values

Assertion Refactorings

Motivation

Lazily evaluated attributes and values are only computed when they are needed.

In tests, some components are good candidates for lazy evaluation, e.g., SUT dependencies and assertion messages.

For instance, assertion failure messages that are expensive to compute may benefit from lazy evaluation.

Qualities

  • Efficiency

Code Demonstration

BEFORE
assertEquals(expected, actual, message: slowComputation())
AFTER
message = () -> slowComputation()
assertEquals(expected, actual, message)
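
As a concrete sketch of the idea, the snippet below emulates JUnit 5's assertEquals(expected, actual, Supplier<String>) overload with a hand-rolled helper (the assertEquals helper, slowComputation, and the counter are illustrative, not JUnit's code): the message supplier is only invoked when the assertion fails.

```java
import java.util.function.Supplier;

public class LazyMessageDemo {
    static int computations = 0;

    // Stands in for an expensive diagnostic computation.
    static String slowComputation() {
        computations++;
        return "expensive diagnostic details";
    }

    // Hand-rolled helper mirroring JUnit 5's assertEquals(expected, actual, messageSupplier).
    static void assertEquals(Object expected, Object actual, Supplier<String> message) {
        if (!expected.equals(actual)) {
            throw new AssertionError(message.get()); // supplier invoked only on failure
        }
    }

    public static void main(String[] args) {
        assertEquals(1, 1, () -> slowComputation()); // passes: message never computed
        if (computations != 0) throw new AssertionError("message was computed eagerly");
        System.out.println("lazy message skipped: computations=" + computations);
    }
}
```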

Sources

StackOverflow

Consolidate Multiple Assertions into a Fluent Assertion

Assertion Refactorings

Motivation

Test cases with many assertions are known as the assertion roulette test smell.

One way to fix it is replacing all assertions that have the same actual value with one single Fluent Assertion.

Fluent Assertions enable chaining multiple condition checks as a sequence of method invocations on the same assertion object.

In comparison, traditional assertions take up to three parameters and are therefore prone to confusion; e.g., failing a binary assertion with a custom message is expressed differently in JUnit 4 (assertEquals(message, expected, actual)), JUnit 5 (assertEquals(expected, actual, message)), and TestNG (assertEquals(actual, expected, message)).

Conversely, in AssertJ, the same assertion would look like assertThat(actual).withFailMessage(message).isEqualTo(expected).

Qualities

  • Uniformity

Code Demonstration

BEFORE
assertEquals(9, actual.size())
assertTrue(actual.contains(element))
AFTER
assertThat(actual)
  .hasSize(9)
  .contains(element)
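
The chaining mechanism can be illustrated with a minimal hand-rolled fluent asserter (ListAssert and the assertThat helper below are a sketch, not the real AssertJ API): each check returns this, so checks compose left to right.

```java
import java.util.List;

public class FluentAssertDemo {
    static class ListAssert {
        private final List<?> actual;
        ListAssert(List<?> actual) { this.actual = actual; }

        ListAssert hasSize(int expected) {
            if (actual.size() != expected)
                throw new AssertionError("expected size " + expected + " but was " + actual.size());
            return this; // returning this enables chaining
        }

        ListAssert contains(Object element) {
            if (!actual.contains(element))
                throw new AssertionError("expected list to contain " + element);
            return this;
        }
    }

    static ListAssert assertThat(List<?> actual) { return new ListAssert(actual); }

    public static void main(String[] args) {
        assertThat(List.of("a", "b", "c")).hasSize(3).contains("b");
        System.out.println("fluent chain passed");
    }
}
```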

Group Multiple Assertions

Assertion Refactorings

Motivation

Test methods with multiple assertions may have their execution interrupted prematurely by an early failure, preventing the evaluation of the remaining assertions (Soares et al., 2023).

Grouping multiple assertions (a₁, a₂,...,aₙ) with JUnit 5's assertAll(()->a₁, ()->a₂,..., ()->aₙ) ensures that all of them execute and that their results are written to the test report before the test terminates.

Qualities

  • Effectiveness

Code Demonstration

BEFORE
assert(a₁)
...
assert(aₙ)
AFTER
assertAll(
  () -> assert(a₁),
  ...
  () -> assert(aₙ)
)
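
The semantics of assertAll can be sketched in plain Java (the assertAll helper below is illustrative, not JUnit's implementation): every lambda runs, all failures are collected, and a single combined error is raised at the end.

```java
import java.util.ArrayList;
import java.util.List;

public class AssertAllDemo {
    // Sketch of JUnit 5's assertAll: run every assertion, collect the failures,
    // and report them together instead of stopping at the first one.
    static void assertAll(Runnable... assertions) {
        List<Throwable> failures = new ArrayList<>();
        for (Runnable a : assertions) {
            try { a.run(); } catch (AssertionError e) { failures.add(e); }
        }
        if (!failures.isEmpty())
            throw new AssertionError(failures.size() + " assertion(s) failed: " + failures);
    }

    public static void main(String[] args) {
        try {
            assertAll(
                () -> { if (1 + 1 != 3) throw new AssertionError("math is broken"); },
                () -> { if (!"a".equals("b")) throw new AssertionError("strings differ"); }
            );
        } catch (AssertionError e) {
            // both failing assertions were evaluated and reported together
            System.out.println(e.getMessage());
        }
    }
}
```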

Sources

Mining Datasets, Literature (Soares et al., 2023)

Replace Assertion

Assertion Refactorings
Variants
1. Replace the Not Operator
2. Split Conditional Parameters
3. Replace Reserved Words

Motivation

1. Conditional logic in assertions using the not (!) operator affects their readability.

2. Conditional expressions forced into assertTrue/assertFalse methods to verify object equality should be replaced with dedicated equality assertions.

3. Developers should use special assertions (e.g., assertNull) instead of comparing against reserved words (true, false, null).

Qualities

  • Uniformity
  • Effectiveness
  • Simplicity

Code Demonstration

BEFORE
assert(!condition)
assert(actual == expected)
assert(actual == true)
assert(actual == null)
AFTER
assertFalse(condition)
assertEquals(expected, actual)
assertTrue(actual)
assertNull(actual)

Enforce Use of Type Inference

Assertion Refactorings

Motivation

As shown by Kashiwa et al. (2021), Change Return Type and Add Parameter refactorings in production APIs are mainly responsible for breaking tests, and require 4-12 lines of changes to fix the tests they break.

A good practice to avoid or limit the impact of such breaking changes in tests is to utilize the var keyword (Java 10 feature) in the place of type references within the tests.

When the var keyword is used, the compiler applies type inference to automatically determine the variable's datatype from its initializer.

This approach makes the tests more resilient to SUT type-related changes.

Qualities

  • Brittleness

Code Demonstration

BEFORE
Type variable = new ConcreteType()
AnotherType result = methodCall()
List<SpecificType> list = getItems()
AFTER
var variable = new ConcreteType()
var result = methodCall()
var list = getItems() 
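
A minimal runnable sketch (getItems is an illustrative factory, not from the entry above): because the types below are inferred from the initializers, changing the factory's declared return type would not require edits at the call sites.

```java
import java.util.ArrayList;
import java.util.List;

public class VarDemo {
    // If this return type later changes (e.g., to Collection<String>),
    // the var declarations below still compile unchanged.
    static List<String> getItems() { return new ArrayList<>(List.of("x", "y")); }

    public static void main(String[] args) {
        var list = getItems();   // inferred as List<String> (Java 10+)
        var size = list.size();  // inferred as int
        System.out.println(list.getClass().getSimpleName() + " of size " + size);
    }
}
```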

Sources

Monitoring

Abstract away SUT's Construction

Fixture Refactorings
Variants
1. AutoFixture
2. Object Mother
3. Test Data Builder with Fluent API

Motivation

1. C# only, AutoFixture works as a generic Test Data Builder and an Auto-Mocking container (SO 2056602) built upon sensible defaults for different attribute types, e.g., DateTime.Now (SO 2622334).

2. Sometimes the same hard-to-build objects can be reused throughout the test suite; ObjectMother (Schuh and Punke, 2001; Fields, 2014) encapsulates the complex build logic into its static methods for reuse and code clone elimination.

3. Unlike ObjectMother, "Test Data Builder copes well with variation in test data" (Fields, 2014). By providing a customizable test object creation with sensible defaults, Test Data Builder favours transparency over conciseness.

Qualities

  • Changeability
  • Uniformity
  • Reliability
  • Brittleness
  • Reusability
  • Efficiency

Code Demonstration

BEFORE
@Test  
void t() {  
    SUT sut = new SUT(new ComplexDependency(
        new NestedDependency(...),  
    ...))  
}  
AFTER
// Variant (1) - C# only
[Test]  
void t() {  
    Fixture fixture = new Fixture()
    SUT sut = fixture.Create<SUT>()
}  

// Variant (2)
@Test  
void t() {  
    SUT sut = ObjectMother.CreateDefaultSUT()  
} 

// Variant (3)
@Test  
void t() {  
    SUT sut = new SutBuilder()  
        .WithParam1(expression)  
        .WithParam2(expression)  
        .Build()  
}  
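
Variant (3) can be sketched in plain Java as follows (Account, AccountBuilder, and their fields are illustrative names, not part of the catalogue entry): sensible defaults keep simple tests concise, while the with* methods make variation explicit.

```java
public class BuilderDemo {
    static class Account {
        final String owner;
        final double balance;
        Account(String owner, double balance) { this.owner = owner; this.balance = balance; }
    }

    // Test Data Builder: every field has a sensible default, overridable via a fluent API.
    static class AccountBuilder {
        private String owner = "default-owner";
        private double balance = 0.0;
        AccountBuilder withOwner(String owner) { this.owner = owner; return this; }
        AccountBuilder withBalance(double balance) { this.balance = balance; return this; }
        Account build() { return new Account(owner, balance); }
    }

    public static void main(String[] args) {
        Account defaults = new AccountBuilder().build();
        Account custom = new AccountBuilder().withOwner("alice").withBalance(50).build();
        System.out.println(defaults.owner + " / " + custom.owner + ":" + custom.balance);
    }
}
```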

Extract Preconditions

Fixture Refactorings
Variants
1. Extract Preconditions to Class-level Fixture
2. Extract Custom Utility Assumption Method
3. Replace Branch with Guard Assertion
4. Replace Branch with Conditional Execution
5. Guard tests with try-catch-assume

Motivation

Preconditions can either be expressed through special assertions (e.g., JUnit 5 assumptions) or regular ones.

It all comes down to whether an unmet precondition should be interpreted as a test failure or not: JUnit 5 assumptions behave like assertions, except that they abort tests without failing them.

Preconditions are a set of rules to determine whether it makes sense to continue the execution of specific test methods.

1. Placing preconditions in a class-level fixture is particularly useful for efficiency as it aborts the execution of all tests in a test class as soon as the preconditions fail once.

2. Custom Utility Assumption Methods help developers keep a consistent mechanism for skipping specific test methods in a test class.

3. Tests can contain code that may or may not be executed due to branching logic. The branching logic may cause the test to finish without executing the intended assertion(s). Adding guard assertions (Meszaros, 2007) to the beginning of tests guarantees that the test code only executes when the established preconditions are met.

4. Conditional Execution (Soares et al., 2023), introduced in JUnit 5, enables test skipping according to programmatic conditions. Unlike guard assertions, Conditional Execution is expressed as method annotations, which further separates test logic from its preconditions.

5. When conditions are not clear or easily computed, expecting an exception can be a valid alternative. A try-catch statement with a failing assumption in the catch-block may be leveraged to make exceptions disable a test rather than fail or pass it. Such a workaround compensates for the lack of an Assumptions.doesNotThrow API in JUnit.

Qualities

  • Efficiency
  • Reliability
  • Uniformity

Code Demonstration

BEFORE
@Test  
void t() {  
    if (C) {  
        stmt
    }
}
AFTER
// Variant (1)
@BeforeAll  
static void setUpClass() {  
    assumeTrue(C) 
}  

@Test  
void t() {  
    stmt  
}  

// Variant (2)
@Test  
void t() {  
    assumePrecondition() 
    stmt  
}  

// Variant (3)
@Test  
void t() {  
    assertTrue(C)
    stmt  
}  

// Variant (4)
@Test  
@EnabledIf("C") 
void t() {  
    stmt  
}  

// Variant (5): Provided stmt throws E when C is false
@Test
void t() {
    try {
        stmt
    } catch (E) {
        assumeTrue(false)
    }
}
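
A plain-Java sketch of variant (5) (TestAbortedException and the assumeTrue helper below mimic, but are not, JUnit's own types): the caught exception is converted into an aborted test rather than a failure.

```java
public class TryCatchAssumeDemo {
    // Mimics the marker exception JUnit uses internally to abort (skip) a test.
    static class TestAbortedException extends RuntimeException {
        TestAbortedException(String m) { super(m); }
    }

    static void assumeTrue(boolean condition) {
        if (!condition) throw new TestAbortedException("precondition not met; test skipped");
    }

    static void testBody() {
        try {
            // Stands in for "stmt throws E when C is false".
            throw new IllegalStateException("environment not available");
        } catch (IllegalStateException e) {
            assumeTrue(false); // abort the test instead of failing it
        }
    }

    public static void main(String[] args) {
        try {
            testBody();
        } catch (TestAbortedException e) {
            System.out.println("SKIPPED: " + e.getMessage());
        }
    }
}
```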

Sources

Monitoring, StackOverflow, Literature (Meszaros, 2007; Soares et al., 2023)

Reuse code with Fixture / Extract Fixture

Fixture Refactorings

Motivation

Test methods within a single test class may evolve and share many code similarities.

In that case, unless the duplication only affects a few methods in the test class, it is advisable to place the duplicated code into a test fixture, e.g., set-up or tear-down methods.

The former comprises test data creation and configuration, whereas the latter should clean up the environment and destroy test objects safely.

If the desired test fixture type already exists in the test class, one can reuse it.

Otherwise, one extracts a new fixture containing the duplicated code.

Qualities

  • Reliability
  • Uniformity
  • Reusability
  • Changeability
  • Simplicity

Code Demonstration

BEFORE
@Test  
void t1() {  
    setUp
    stmt
    teardown
}

@Test  
void t2() {  
    setUp
    stmt'
    teardown
}  

@AfterEach
void tearDown() {
    teardown'
}
AFTER
// Extract Fixture
@BeforeEach  
void setUp() {  
    setUp
} 

// Reuse Code with Fixture
@AfterEach  
void tearDown() {
    teardown  
    teardown'
}  

@Test  
void t1() {  
    stmt 
} 

@Test  
void t2() {  
    stmt'
}

Inline Fixture

Fixture Refactorings

Motivation

As new methods are added and others removed, test fixtures may become obsolete and less cohesive, i.e., the General Fixture test smell.

As a consequence, one may inline the test fixture, introducing code duplication into the few test methods that depend on it.

Qualities

  • Changeability
  • Independence

Code Demonstration

BEFORE

@BeforeEach  
void setUp() {  
    setUp
} 

@AfterEach  
void tearDown() {  
    teardown
}  

@Test  
void t1() {  
    stmt  
}  

@Test  
void t2() {  
    stmt'  
}  
AFTER

@Test  
void t1() {  
    setUp
    stmt
    teardown 
}  

@Test  
void t2() {  
    setUp
    stmt'
    teardown 
} 

Sources

Mining Datasets, StackOverflow

Minimize Fixture

Fixture Refactorings

Motivation

Diverging context of test methods in a single class leads to a General Fixture, i.e., a fixture whose statements inconsistently target different subsets of the test methods.

"Minimizing fixtures make tests better suitable as documentation and less sensitive to changes" (Deursen et al., 2001).

Qualities

  • Independence

Code Demonstration

BEFORE

@BeforeEach  
void setUp() {  
    setUp
    setUp'  
    setUp''
}

@Test
void t1() {
    stmt
}

@Test
void t2() {
    stmt'
}  
AFTER
@BeforeEach
void setUp() {
    setUp
}

@Test
void t1() {
    setUp'
    stmt
}

@Test
void t2() {
    setUp''
    stmt'
}

Sources

Mining Datasets, StackOverflow, Literature (Meszaros, 2007; Deursen et al., 2001)

Override Fixtures

Fixture Refactorings

Motivation

Despite the risk of introducing redundancy and inter-test dependency, inheritance is extensively used in Java tests.

For reusing tests under different environment settings, testers extract subclasses that override test fixtures.

However, JUnit 5 changed how overriding test fixtures are executed.

While JUnit 4 always executes test fixtures implemented in subclasses, JUnit 5 requires the subclass's fixture to be annotated with one of the fixture annotations (i.e., @BeforeEach, @BeforeAll, @AfterEach, or @AfterAll) even when it overrides a superclass fixture that is already annotated as such (Travis, 2018).

Qualities

  • Changeability
  • Reusability

Code Demonstration

BEFORE
class C1 {
    @BeforeEach  
    void setUp() {  
        setUp      
    }
} 

class C2 {
    void setUp() {  
        setUp              
        setUp'  
    }  
}
AFTER
class C1 {
    @BeforeEach  
    void setUp() {  
        setUp 
    }  
}

class C2 extends C1 {
    @Override  
    @BeforeEach  
    void setUp() {
        super.setUp()
        setUp'  
    }  
}

Inject SUT Dependency

Fixture Refactorings
Variants
1. Constructor-based Dependency
2. Replace Fixture with Dependency Injection

Motivation

1. Complex SUT dependencies are good mocking candidates: injecting such dependencies through the SUT constructor often makes it possible to evaluate the interaction with the mocked dependency.

2. Some dependency injection containers, e.g., Spring and Guice, support constructor injection, but test classes usually have a setup method (a test fixture) rather than a constructor. To take advantage of Dependency Injection, the test fixture should be replaced with an injected constructor.

Qualities

  • Reliability
  • Uniformity
  • Brittleness

Code Demonstration

BEFORE
class C {
    @BeforeEach
    void setUp() {
        var dep = new DependencyImpl()
        sut = new SUT(dep)     
    }
}
AFTER
// Variant (1)
class C {
    @BeforeEach
    void setUp() {
        Dependency dep = mock(Dependency.class)  
        SUT sut = new SUT(dep)  
    }
}

// Variant (2)
class C {
    @Autowired
    C(Dependency dep) {
        sut = new SUT(dep)     
    }
}

Replace Class Fixture with Method Fixture

Fixture Refactorings

Motivation

The components defined in a test fixture should not be modified during the test execution unless the test fixture runs once per test method.

Qualities

  • Independence

Code Demonstration

BEFORE
@BeforeAll
void setUp() { }
AFTER
@BeforeEach
void setUp() { }

Replace Method Fixture with Class Fixture

Fixture Refactorings

Motivation

Sometimes a test fixture does not have to execute once for each test method; thus, constraining its execution to once for all methods in a test class can speed up test execution.

Qualities

  • Efficiency

Code Demonstration

BEFORE
@BeforeEach
void setUp() { }
AFTER
@BeforeAll
void setUp() { }

Split Fixture

Fixture Refactorings

Motivation

When performance is a concern, testers may prefer a setupClass method over a setup method, but there is a caveat: not all statements are safe to run only once in setupClass.

Developers can then split the time-consuming part of a fixture, which goes into setupClass, from the part that requires execution before each test, which goes into setup.

Qualities

  • Efficiency

Code Demonstration

BEFORE
@BeforeEach
void setUp() {
    stmt
    stmt'
}
AFTER
@BeforeAll
void setUpClass() {
    stmt
}

@BeforeEach
void setUp() {
    stmt'
}

Sources

Mining Datasets, StackOverflow

Merge Fixture

Fixture Refactorings

Motivation

Merge separate fixture methods to simplify setup logic when separation is unnecessary.

Qualities

  • Changeability
  • Simplicity

Code Demonstration

BEFORE
@BeforeAll
void setUpClass() {
    stmt
}

@BeforeEach
void setUp() {
    stmt'
}
AFTER
@BeforeEach
void setUp() {
    stmt
    stmt'
}

Sources

Mining Datasets

Parameterize Test with Framework Support

Test Method Refactorings
Variants
1. Extract Common Logic from Multiple Test Methods
2. Merge Data Provider
3. Multiple Assertion Type Test become Conditional Parameterized Test
4. Multiple Data and Multiple Algorithms become Parameterized Test with Inheritance and Fixture Overrides

Motivation

1. The logic of many test methods may share similarities, i.e., test code duplication. When multiple methods differ only in their data inputs, they are good candidates for parameterization.

2. JUnit 5's and TestNG's parameterized test classes have at least one data provider, but having many may compromise the tests' cohesion and maintainability.

3. To accommodate variation in the reuse of mostly similar test methods, parameterized tests may include flag parameters to enable/disable test methods with assumptions or assertions guarded by if-statements.

4. When multiple implementations with similar interfaces and behaviour (e.g., sorting algorithm) must be tested for multiple data, one may parameterize the test and extract test subclasses to build each implementation in a setup fixture.

Qualities

  • Reusability
  • Reliability
  • Uniformity

Code Demonstration

BEFORE
// Variant (1)
@Test  
void t1() {  
    stmt¹¹(P₁₁) 
    ...
    stmt¹ⁿ(P₁ₙ)
} 
...
@Test  
void t2() {  
    stmtᵐ¹(Pₘ₁) 
    ...
    stmtᵐⁿ(Pₘₙ)
}  

// Variant (2)
@CsvSource({
    P₁₁, ..., P₁ₙ
})  
void t1() { } 

@CsvSource({
    Pₘ₁, ..., Pₘₙ
})  
void t2() { } 
AFTER
// Variant (1) and (2)
@ParameterizedTest  
@CsvSource({
    P₁₁, ..., P₁ₙ
    ...
    Pₘ₁, ..., Pₘₙ
})    
void t(P₁, ..., Pₙ) {
    stmt¹(P₁)
    ...
    stmtⁿ(Pₙ)
}
BEFORE
void t() {
    List actual = sut.method()
    assertEquals(expected, actual)
}
void t2() {
    int[] actual = sut.method()
    assertArrayEquals(expected, actual)
}
AFTER
@ParameterizedTest
@MethodSource
void t(SUT sut) {
    actual = sut.method()
    if (P₁ instanceof P) {
        assertEquals(expected, actual)
    }
    else {
        assertArrayEquals(expected, actual)
    }
    ...
    assumeTrue(Pₙ)
    stmtⁿ(Pₙ)
}
BEFORE
class AlgorithmATest {
    Algorithm algo = new AlgorithmA()
    
    @Test
    void t1() { 
        algo.process(d')
    }

    @Test
    void t2() { 
        algo.process(d'')
    }
}

class AlgorithmBTest {
    Algorithm algo = new AlgorithmB()
    
    @Test
    void t1() { 
        algo.process(d')
    }

    @Test
    void t2() { 
        algo.process(d'')
    }
}
AFTER
abstract class BaseTest {
    Algorithm algo
    
    @ParameterizedTest
    @MethodSource("data")
    void sharedTests(InputType input) {
        Result result = algo.process(input)
        assertBehavior(result)
    }
    
    static Stream<Arguments> data() {
        return Stream.of(
            Arguments.of(d'),
            Arguments.of(d'')
        )
    }
}

class AlgorithmATest extends BaseTest {
    @BeforeEach
    void setup() { algo = new AlgorithmA() }

}

class AlgorithmBTest extends BaseTest {
    @BeforeEach
    void setup() { algo = new AlgorithmB() }
}

Dependency-free Test Parameterization

Test Method Refactorings

Motivation

The logic of many test methods may share similarities, i.e., test code duplication. When test cases differ in the Arrange phase (setup), developers may choose to parameterize their fixture or utility methods. However, support is still incipient in most test frameworks (see junit5#878). As alternatives, developers leverage method and superclass extractions to achieve the same behaviour.

Qualities

  • Reusability
  • Uniformity

Code Demonstration

BEFORE
@Test 
void t1() {
    setUp(expression)
    actual = sut.method()
    assertEquals(expected, actual)
}

@Test 
void t2() {
    setUp(expression')
    actual' = sut.method()
    assertEquals(expected', actual')
}
AFTER
void t(expression, expected) {
    setUp(expression)
    actual = sut.method()
    assertEquals(expected, actual)
}

@Test 
void t1() {
    t(expression, expected)
}

@Test 
void t2() {
    t(expression', expected')
}

Enhance Test Report

Test Method Refactorings
Variants
1. Customize Error Message2. Parameterize Test Name3. Add Assertion Explanation4. Change Order of Assertion’s Parameters5. Rename Test6. Specialize Expected Exception7. Rename Data Provider

Motivation

1. Good error messages in assertions and exceptions are useful for debugging a failing test case.

2. Test frameworks rely on method names to identify tests in a test report; parameterized tests, in turn, may benefit from custom names that distinguish the execution report of each set of parameters.

3. Having more than one assertion with no explanation in a test method is known as the Assertion Roulette test smell. The names of tests suffering from that smell may not clearly identify the failing assertion.

4. Testers may misplace the expected and actual value parameters of assertions due to conflicting standards in assertion parameter order among Java test libraries, which results in misleading test reports.

5. The name of test methods and classes must indicate context, trigger, and behaviour of the SUT. Projects and communities may also enforce compliance to certain naming idioms. When one of those elements changes, the names should be updated to reflect it.

6. Incorrect use of built-in exception types may introduce ambiguity. Testers disambiguate exception testing by extracting a subclass of the ambiguous exception.

7. Like test methods, data providers should be named after the data they provide and their intended purpose. Despite differing syntax, all major Java test frameworks offer a name parameter for data provider annotations, in which developers may describe the provider in a human-readable format.

Qualities

  • Effectiveness
  • Uniformity

Code Demonstration

BEFORE
public class AccountService {
    public void withdraw(double amount) {
        if (amount <= 0) {
            throw new IllegalArgumentException("Invalid amount")
        }
    }
}
AFTER
public class AccountService {
    public void withdraw(double amount) {
        if (amount <= 0) {
            throw new IllegalArgumentException(
                String.format("Withdrawal amount must be positive. Got: %.2f", amount)
            )
        }
    }
}
BEFORE
@ParameterizedTest
@ValueSource()
void t(params) {  }
AFTER
@ParameterizedTest(name = "Descriptive name {arguments}")
@ValueSource()
void t(params) { }
BEFORE
assertTrue(condition)
assertNotNull(obj)
AFTER
assertTrue("Condition should hold", condition)
assertNotNull("Object must be initialized", obj)
BEFORE
assertEquals(actual, expected)
AFTER
assertEquals(expected, actual)
BEFORE
@Test
void test1() { }
AFTER
@Test
void descriptiveTestName() { }
BEFORE
@Test(expected = GeneralException.class)
void t() { }
AFTER
@Test(expected = SpecificException.class)
void t() { }
BEFORE
@DataProvider(name = "The data provider")
static Object[] source() { }

@Test(dataProvider = "The data provider")
void t(params) { }
AFTER
@DataProvider(name = "Positive cases data provider")
static Object[] source() { }

@Test(dataProvider = "Positive cases data provider")
void t(params) { }

Integration Test to Unit Test

Test Method Refactorings

Motivation

There are inherent trade-offs between integration testing and unit testing (Aniche, 2022; Osherove, 2009; Vocke, 2018). Unit tests are well suited for high-churn logic and test-driven development because they excel at validating isolated components with mocked dependencies, offering quick execution, accurate failure localization, and low maintenance. Their limited scope, however, risks missing systemic defects that emerge from component interactions.

Integration tests, on the other hand, confirm the interoperability of external systems and end-to-end workflows, revealing emergent problems such as dataflow errors or protocol mismatches. This comes at the expense of slower execution, higher infrastructure requirements, and increased flakiness due to external dependencies. Integration tests reduce the risk of architectural misalignment, but with higher operational complexity and delayed debugging compared to unit tests, which emphasize agility and detailed feedback.

The testing pyramid model (Vocke, 2018) reflects a balanced strategy that prioritizes unit tests for core logic and reserves integration tests for crucial interfaces. Breaking integration tests into unit tests consists in promoting isolation through mocking and proper set-up/clean-up between consecutive test executions. This ensures robust coverage of isolated behaviours while balancing risk mitigation with quick feedback.

Qualities

  • Brittleness
  • Independence
  • Efficiency

Code Demonstration

BEFORE
setupDatabaseConnection()
User user = userService.create("test@example.com")
teardownDatabaseConnection()
assertTrue(userExistsInDB("test@example.com"))
AFTER
InMemoryUserRepository fakeRepository = new InMemoryUserRepository()
UserService service = new UserService(fakeRepository)    
User user = service.create("test@example.com")    
assertTrue(fakeRepository.contains("test@example.com"))
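
The AFTER snippet can be fleshed out into a self-contained sketch (the repository and service below are illustrative stand-ins, not a real framework): the in-memory fake removes the database dependency entirely, making the test fast and isolated.

```java
import java.util.HashSet;
import java.util.Set;

public class InMemoryRepoDemo {
    // In-memory fake standing in for a database-backed repository.
    static class InMemoryUserRepository {
        private final Set<String> emails = new HashSet<>();
        void save(String email) { emails.add(email); }
        boolean contains(String email) { return emails.contains(email); }
    }

    static class UserService {
        private final InMemoryUserRepository repository;
        UserService(InMemoryUserRepository repository) { this.repository = repository; }
        void create(String email) { repository.save(email); }
    }

    public static void main(String[] args) {
        InMemoryUserRepository fakeRepository = new InMemoryUserRepository();
        UserService service = new UserService(fakeRepository);
        service.create("test@example.com");
        if (!fakeRepository.contains("test@example.com"))
            throw new AssertionError("user was not stored");
        System.out.println("user stored in fake repository");
    }
}
```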

Sources

StackOverflow

Replace Duplicate Assert with Repeat Tests

Test Method Refactorings

Motivation

When a test repeatedly evaluates the same assertion for a fixed number of times, the code can be simplified with @RepeatedTest.

Qualities

  • Simplicity
  • Changeability

Code Demonstration

BEFORE
@Test
void t() {
    stmt // 1ˢᵗ repetition
    assertEquals(expected, actual)
    ...
    stmt // mᵗʰ repetition    
    assertEquals(expected, actual)
}
AFTER
@RepeatedTest(m)
void t() {
    stmt
    assertEquals(expected, actual)
}

Instances

No instances recorded.

Sources

Literature (Kim et al., 2021)

Reuse Test Methods

Test Method Refactorings
Variants
1. Between Unit and Integration
2. Across Projects

Motivation

1. Oftentimes, unit tests differ from integration tests only in their mocked dependencies; one may find it useful to generalize the test code so that real instances and mocks can be used interchangeably.

2. Including third-party test suites in a custom TestSuite facilitates seamless integration and execution of external test cases when using third-party plug-in components.

Qualities

  • Reusability
  • Reliability
  • Independence

Code Demonstration

BEFORE
@Test  
void unitT() {  
    Dependency mock = mock(Dependency.class)  
    SUT sut = new SUT(mock)  
    stmt(sut)  
    verify(mock).expectedCall()  
}

@Test  
void integrationT() {  
    Dependency real = new Dependency()  
    SUT sut = new SUT(real)  
    assertEquals(expected, stmt(sut))  
}
AFTER
abstract class BaseTest {  
    abstract Dependency dependency() 

    @Test  
    void t() {  
        SUT sut = new SUT(dependency())  
        stmt(sut)  
        assertOrVerify(expected)
    }  
}  

class UnitC extends BaseTest {  
    Dependency dependency() { 
        return mock(Dependency.class) 
    }  
}  

class IntegrationC extends BaseTest {  
    Dependency dependency() { 
        return new Dependency() 
    }  
}

Introduce Test Parameter Object

Test Method Refactorings

Motivation

Like production code methods, parameterized tests with many parameters may become hard to read. Introducing parameter objects (Fowler et al., 2012) encapsulates all parameters into one object.

Qualities

  • Uniformity

Code Demonstration

BEFORE
@ParameterizedTest  
@CsvSource({
    P₁₁, ..., P₁ₙ
    ...
    Pₘ₁, ..., Pₘₙ
})    
void t(P₁, ..., Pₙ) {  }  
AFTER
@ParameterizedTest  
@MethodSource("data")  
void t(ParamObj P) {  }  

public static Stream<Arguments> data() {  
    return Stream.of(  
        Arguments.of(new ParamObj(P₁₁, ..., P₁ₙ)),
        ...
        Arguments.of(new ParamObj(Pₘ₁, ..., Pₘₙ))
    )  
}
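
A dependency-free sketch of the same idea (Case, square, and the sample data are illustrative): each test case's parameters travel together in one object instead of a long parameter list.

```java
import java.util.List;

public class ParamObjectDemo {
    // Parameter object bundling each case's input and expected output.
    static class Case {
        final int input;
        final int expected;
        Case(int input, int expected) { this.input = input; this.expected = expected; }
    }

    // Stand-in for the method under test.
    static int square(int x) { return x * x; }

    public static void main(String[] args) {
        List<Case> data = List.of(new Case(2, 4), new Case(3, 9), new Case(-4, 16));
        for (Case c : data) {
            if (square(c.input) != c.expected)
                throw new AssertionError("square(" + c.input + ") != " + c.expected);
        }
        System.out.println(data.size() + " cases passed");
    }
}
```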

Sources

StackOverflow

Extract Utility Method

Test Method Refactorings
Variants
1. Extract Act Method
2. Extract Assertion Method
3. Extract Fixture Utility Method
4. Delegate Assertion of Disjoint SUTs with same API

Motivation

After identifying common code that can be shared between two or more test classes, utility methods may be extracted.

1. Standardizes a complex MUT (Method-Under-Test) invocation throughout the test code.

2. Combines many assertion invocations into a custom higher-level assertion method with a semantically accurate name and useful failure messages.

3. Encapsulates environment configuration and clean-up, through data creation, deletion, and transformation, without depending on test framework features and conventions, which promotes higher flexibility and reusability at the cost of conciseness.

4. SUTs that share public methods with the same signatures but no common interface can hinder reusability within tests. Testers may choose to extract the assertions into a utility method that expects a lambda to invoke the MUT (Method-Under-Test).

Qualities

  • Reliability
  • Uniformity
  • Changeability
  • Reusability

Code Demonstration

BEFORE
@Test
void t1() {
    stmt
    actual = sutA.method()  
    assertEquals(expected, actual)  
    stmt'
}

@Test
void t2() {
    stmt
    actual = sutA.method()  
    assertEquals(expected, actual)  
    stmt'
}

@Test
void t3() {
    actual = sutA.method()  
    assertEquals(expected, actual) 
    actual = sutB.method()  
    assertEquals(expected, actual)  
}
AFTER
// Variant (1)
void setUp() {
    stmt
}
void tearDown() {
    stmt'
}
// Variant (2)
void act() {
    actual = sutA.method()
}
// Variant (3)
void customAssert() {
    assertEquals(expected, actual)
}

@Test
void t1() {
    setUp()
    act()
    customAssert() 
    tearDown()
}
@Test
void t2() {
    setUp()
    act()
    customAssert() 
    tearDown()
}

// Variant (4)
@Test
void TSutA() {
    testMutBehavior(() -> sutA().method())
}
@Test
void TSutB() {
    testMutBehavior(() -> sutB().method())
}
void testMutBehavior(Func<?> func) {
    actual = func()
    assertEquals(expected, actual) 
}
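
Variant (4) can be sketched with java.util.function.Supplier standing in for the Func type in the pseudocode (SutA and SutB are illustrative classes with no common interface): the lambda hides which SUT is invoked, so one utility method asserts the shared behaviour.

```java
import java.util.function.Supplier;

public class DelegateAssertDemo {
    // Two unrelated SUTs exposing a method with the same shape but no common interface.
    static class SutA { String method() { return "result"; } }
    static class SutB { String method() { return "result"; } }

    // Shared assertion utility: the supplier delegates the MUT invocation.
    static void testMutBehavior(Supplier<String> func) {
        String actual = func.get();
        if (!"result".equals(actual)) throw new AssertionError("unexpected: " + actual);
    }

    public static void main(String[] args) {
        testMutBehavior(() -> new SutA().method());
        testMutBehavior(() -> new SutB().method());
        System.out.println("both SUTs verified via one utility");
    }
}
```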

Categorize Test Method

Test Method Refactorings

Motivation

Test suites may contain groups of test methods with common features, e.g., execution time, code size, targeted quality aspect, or technical debt. Developers may want to execute those groups separately and in a specific order (test prioritization).

Qualities

  • Efficiency

Code Demonstration

BEFORE
class C {
    @Test  
    void fastTest() {  } 
    
    @Test  
    void largeDatasetTest() { } 
}
AFTER
// Target Test
@Tag("Fast")  
@Test  
void fastTest() { }  

// Test Prioritization
@Suite  
@SelectClasses({
    FastTests.class, 
    LargeDatasetTests.class
})  

class FastTests { 
    @Test
    void fastTest() { }
}  

@Tag("LargeDataset")  
class LargeDatasetTests {  
    @Test  
    void largeDatasetTest()  { }  
}  

// Method Order
@TestMethodOrder(OrderAnnotation.class)
class C {
    @Test  
    @Order(1)  
    void fastTest() { } 
    
    @Test  
    @Order(3)  
    void largeDatasetTest() { }
}

Extract Utility Class

Test Method Refactorings
Variants
1. Extract Class with many Utility Methods
2. Extract Hamcrest's Matchers/JMockit's Expectations
3. Pull-up Method To Utility Class

Motivation

1. After identifying common code that can be shared between two or more test classes, utility methods (e.g., custom assertions, data creation, and transformation methods) can be extracted and collected into a General Utility Class.

2. Using named Expectations in JMockit allows you to set up custom shared mocks in a base class, streamlining repetitive setup across test classes.

3. Sharing utility methods among few, related test classes is possible through a common utility superclass to which utility methods can be moved.

Qualities

  • Uniformity
  • Independence

Code Demonstration

BEFORE
class C1 {
    @Test
    void t1() {
        commonUtilityMethod()
    }
    
    private void commonUtilityMethod() { }
}

class C2 {
    @Test
    void t2() {
        commonUtilityMethod()
    }
    
    private void commonUtilityMethod() { }
}
AFTER
class UtilsClass {
    public static void commonUtilityMethod() { }
}

@Test
void t1() {
    UtilsClass.commonUtilityMethod()
}

@Test
void t2() {
    UtilsClass.commonUtilityMethod()
}
BEFORE
@Test
public void t1() {
    assertThat(expression, allOf(
        assert',
        assert''
    ))
}
@Test
public void t2() {
    assertThat(expression', not(allOf(
        assert',
        assert''
    )))
}
AFTER
public static Matcher<String> myMatcher() {
    return allOf(
        assert',
        assert''
    )
}
@Test
public void t1() {
    assertThat(expression, myMatcher())
}
@Test
public void t2() {
    assertThat(expression', not(myMatcher()))
}
BEFORE
abstract class BaseTest { }

class C1 extends BaseTest {
    private void utilityMethod() { }
}
AFTER
abstract class BaseTest {
    protected void utilityMethod() { }
}

class C1 extends BaseTest { }

Split Test Method

Test Method Refactorings

Motivation

Test methods may evolve and accumulate multiple responsibilities, which is associated with the Eager Test and Assertion Roulette test smells as well as lower cohesion. Moreover, some test frameworks (e.g., MSpec) enforce a one-assertion-per-method rule. In those cases, test methods can be split into multiple pure test methods to improve the execution trace for debugging a failure.

Qualities

  • Uniformity

Code Demonstration

BEFORE
@Test
void t() {
  assert(A)
  assert(B)
}
AFTER
@Test
void t1() { assert(A) }

@Test
void t2() { assert(B) }

Merge Test Method

Test Method Refactorings

Motivation

Test suites can accumulate redundancies over time, e.g., tests with redundant test statements. When such redundancy is scattered throughout multiple test methods, we can eliminate code duplication by merging them.

Qualities

  • Changeability
  • Efficiency
  • Simplicity

Code Demonstration

BEFORE
@BeforeEach  
void setUp() {  
    sut = new SUT() 
}  
@Test  
void t1() {  
    actual = sut.method()  
    assertEquals(expected, actual)   
}  
@Test  
void t2() {  
    actual' = sut.method'()
    assertEquals(expected, actual')  
}
AFTER
@Test  
void t() {  
    sut = new SUT()  
    
    actual = sut.method()      
    assertEquals(expected, actual)   
    
    actual' = sut.method'()  
    assertEquals(expected, actual') 
}

Sources

StackOverflow, Literature (Fang and Lam, 2015; Vahabzadeh et al., 2018)

Migrate Categories

Migration Refactorings

Motivation

In Java, test prioritization and test filtering are achieved through test frameworks' annotations.

Although TestNG's @Test(groups=), JUnit 4's @Category, and JUnit 5's @Tags are equivalent features, they differ in both flexibility and ease of use.

Tags and groups take string parameters, whereas categories expect marker interfaces.

While we can pass many parameters to groups and categories, test methods with many tags are represented with multiple annotations.

Lastly, tags integrate with other JUnit 5 features, such as custom display name for improving the test execution report.

Qualities

  • Simplicity
  • Effectiveness

Code Demonstration

BEFORE
// JUnit 4 Category Marker  
public interface FastTests {}  
@Test  
@Category(FastTests.class)  
void t() {  }  

// TestNG Group  
@Test(groups = "security")  
void securityTest() {  }
AFTER
// JUnit 5 Tag Migration
@Tag("fast")  
@Test  
void t() { }  

@Tag("security")  
@Test  
void securityTest() { }

Sources

Mining Datasets

Migrate Runner

Migration Refactorings

Motivation

JUnit supports extending and modifying its core functionalities by overriding the default test runner (JUnit 4) or registering extensions (JUnit 5). Many projects using JUnit develop custom runners or leverage third-party ones. Popular third-party runners often enhance integration with dependency injection, mocking, or web frameworks. Others backport later JUnit features, such as parameterized and exception testing. Often, popular JUnit 4 runners have corresponding JUnit 5 extensions, simplifying migration. However, migrating tests with custom runners may require adapting JUnit 4's Runner API to JUnit 5's Extension API.

Qualities

  • Reliability

Code Demonstration

BEFORE
@RunWith(SpringJUnit4ClassRunner.class)
class SpringTest {}
AFTER
@ExtendWith(SpringExtension.class)
class SpringTest {}

Migrate Mock

Migration Refactorings
Variants
1. To Mockito
2. To Powermock

Motivation

1. Ease of use and active development are among the reasons some projects migrate their mocks to Mockito.

2. Earlier versions of mocking frameworks (i.e., Mockito, EasyMock, and JMock) had limited support for mocking static methods. PowerMock is a reflection-powered extension to EasyMock that optionally integrates with Mockito through its PowerMockito class. Among its features, it includes support for mocking static methods.

Qualities

  • Changeability

Code Demonstration

BEFORE
@ExtendWith(EasyMockExtension.class)
class C {
    @Mock Dependency dependency
    @Test
    void t() {
        expect(dependency.method()).andReturn(expected)
        replay(dependency)
        SUT sut = new SUT(dependency)
        assertEquals(expected, sut.method())
        verify(dependency)
    }
}
AFTER
@ExtendWith(MockitoExtension.class)
class C {
    @Mock Dependency dependency
    @InjectMocks SUT sut
    @Test
    void t() {
        when(dependency.method()).thenReturn(expected)
        assertEquals(expected, sut.method())
    }
}
BEFORE
@Test
void t() {
    Connection conn = DatabaseUtils.getConnection()
    assertTrue(conn.isValid(0))
}
AFTER
@RunWith(PowerMockRunner.class)
@PrepareForTest(DatabaseUtils.class)
public class DatabaseUtilsTest {
    @Test
    public void testDatabaseConnection() {
        PowerMockito.mockStatic(DatabaseUtils.class)
        Connection mockConn = Mockito.mock(Connection.class)
        Mockito.when(DatabaseUtils.getConnection()).thenReturn(mockConn)
        Mockito.when(mockConn.isValid(0)).thenReturn(true)

        Connection conn = DatabaseUtils.getConnection()

        assertTrue(conn.isValid(0))
        PowerMockito.verifyStatic(DatabaseUtils.class)
        DatabaseUtils.getConnection()
    }
}

Migrate Fixture

Migration Refactorings

Motivation

Test fixtures are key test framework features for reusing test code. Generally, test frameworks provide two class-level fixtures and two method-level fixtures. While class fixtures run once before (or after) all tests in a class, method fixtures run before (or after) each test. Unlike JUnit 5, TestNG (and JUnit 4 with @RunWith(Parameterized.class)) allows fixtures to access tests' parameters.

Qualities

  • Reusability

Code Demonstration

BEFORE
@Before
void setUp() { }

@BeforeClass
static void setUpClass() { }

@After
void tearDown() { }

@AfterClass
static void tearDownClass() { }
AFTER
@BeforeEach
void setUp() { }

@BeforeAll
static void setUpClass() { }

@AfterEach
void tearDown() { }

@AfterAll
static void tearDownClass() { }

Migrate Assertion

Migration Refactorings
Variants
1. JUnit to Hamcrest
2. ReflectionAssert to JUnit
3. JUnit/TestNG to AssertJ
4. RSpec expect change
5. JUnit 4 to JUnit 5

Motivation

An adopted test library may lack some desired features available in other libraries, e.g., fluent API assertions, modular fixtures, parameterized tests, and parallel test execution.

1. Hamcrest is an external dependency of JUnit 4, whose Assert.assertThat used to delegate to Hamcrest's MatcherAssert.assertThat. While JUnit 5 avoids external dependencies, its tests remain compatible with third-party assertion libraries.

2. ReflectionAssert (part of Unitils) has stalled maintenance (discontinued in 2022). Projects migrate away from it to ensure long-term support.

3. AssertJ is a fluent assertions library compatible with most test frameworks. Beyond its syntax, it also ships with multiple unique, advanced assertions.

4. RSpec's expect syntax with the "change" matcher eliminates the need for multiple assertions (i.e., to validate pre- and post-conditions) by encapsulating state transitions in a single check. Therefore, it supports the principle of "one expectation per test".

5. JUnit 5 introduces subtle changes to JUnit 4's assertions set, including three new assertion types (assertThrows, assertTimeout, and assertTimeoutPreemptively), message parameter position (last parameter in JUnit 5), and support for lazily evaluated message parameters.

Qualities

  • Uniformity

Code Demonstration

BEFORE
assertEquals(expected, actual)
AFTER
assertThat(actual, is(equalTo(expected)))
BEFORE
ReflectionAssert.assertReflectionEquals(expected, actual)
AFTER
assertEquals(expected, actual)
BEFORE
assertEquals(message, expected, actual)
AFTER
assertThat(actual).withFailMessage(message).isEqualTo(expected)
BEFORE
assertEquals("Failure message", expected, actual)
AFTER
assertEquals(expected, actual, () -> "Failure message")
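Variant 5's lazily evaluated message can be reproduced outside JUnit with a plain Supplier<String>. A minimal, runnable sketch of the mechanism (the assertEquals helper and expensiveDiagnostics below are illustrative stand-ins, not JUnit's implementation):

```java
import java.util.function.Supplier;

public class LazyMessageDemo {
    static int evaluations = 0;

    // Hypothetical expensive diagnostic message, e.g., serializing a large object graph
    static String expensiveDiagnostics() {
        evaluations++;
        return "full dump of the SUT state";
    }

    // Mimics JUnit 5's assertEquals(expected, actual, Supplier<String>):
    // the message supplier is invoked only when the assertion fails
    static void assertEquals(Object expected, Object actual, Supplier<String> message) {
        if (!expected.equals(actual)) {
            throw new AssertionError(message.get());
        }
    }

    public static void main(String[] args) {
        // Passing assertion: the supplier is never evaluated
        assertEquals(42, 42, () -> expensiveDiagnostics());
        System.out.println(evaluations); // prints 0
    }
}
```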

Migrate Parameterized Test

Migration Refactorings
Variants
1. TestNG to JUnit 5
2. JUnit 4 to JUnit 5
3. JUnit's Parameterized Test to JUnit 5 Test Template

Motivation

1. Both TestNG and JUnit 5 provide built-in support for test parameterization. However, TestNG relies exclusively on data provider methods, whereas JUnit 5 offers additional argument providers, e.g., external CSV files, inline CSV values, and null/empty values.

2. JUnit 4 includes test parameterization as an optional feature enabled through @RunWith(Parameterized.class), which restricts the use of other third-party runners, e.g., Mockito's. Conversely, JUnit 5 supports parameterization by default, allows multiple argument providers within a single class, and includes over ten provider types, such as MethodSource (equivalent to JUnit 4's only provider).

3. In JUnit 5, Parameterized test (and Repeated test) is the specialization of another feature, Test Template, which enables providing multiple invocation contexts to methods.

Qualities

  • Simplicity
  • Reusability

Code Demonstration

BEFORE
@Test(dataProvider = "data")
void t(P₁, ..., Pₙ) { }

@DataProvider
Object[][] data() {
    return new Object[][]{
            { P₁₁, ..., P₁ₙ },
            ...
            { Pₘ₁, ..., Pₘₙ }
    }
}
AFTER
@ParameterizedTest
@CsvSource({
    P₁₁, ..., P₁ₙ,
    ...
    Pₘ₁, ..., Pₘₙ
})
void t(P₁, ..., Pₙ) { }
BEFORE
@RunWith(Parameterized.class)
class LegacyTest {
    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][]{{P₁}, ..., {Pₙ}})
    }
    public LegacyTest(P₁, ..., Pₙ) { }
    @Test
    public void t() { }
}
AFTER
class ModernTest {
    @ParameterizedTest
    @MethodSource("data")
    void t(ParamObj P) { }

    public static Stream<Arguments> data() {
        return Stream.of(
            Arguments.of(new ParamObj(P₁₁, ..., P₁ₙ)),
            ...
            Arguments.of(new ParamObj(Pₘ₁, ..., Pₘₙ))
        )
    }
}

Migrate Expected Exception

Migration Refactorings

Motivation

Using try-catch inside a test case to catch an expected exception can lead to unpredictable behavior, e.g., assertions inside the try-block may not execute if the exception is thrown earlier than expected.

Furthermore, the optional @Test annotation attribute expected indicates an exception should be thrown anywhere inside a method body, which leads to false positives due to the lack of fine-grained control over which specific statement should trigger the exception.

Moreover, in JUnit 4.7+, test execution is interrupted as soon as an exception is thrown even if a rule is set to expect an exception.

Therefore, JUnit 5 assertions (backported to JUnit 4.13) can catch exceptions thrown by specific statements while preserving the execution of remaining statements, allowing for post-exception verification.

Qualities

  • Simplicity

Code Demonstration

BEFORE
// Try-fail-catch Pattern
@Test
void t() {
    try {
        stmt
        fail(M)
    } catch (E) {
    }
    stmt'
}
AFTER
// JUnit 5 assertThrows
@Test
void t() {
    assertThrows(E, () -> stmt)
    stmt'
}

Replace Test Annotation between Class and Method

Migration Refactorings

Motivation

TestNG’s @Test annotation can be used on either test methods or test classes. When used on a test class, public methods become test methods (i.e., they execute as if they had a @Test annotation) and non-public methods become utility methods (i.e., they must be invoked explicitly to execute). Since the class-level @Test annotation is encouraged, migrations to or from TestNG often involve this refactoring.

Qualities

  • Simplicity
  • Uniformity

Code Demonstration

BEFORE
class C {  
    @Test  
    public void t1() {  } 
    
    @Test  
    public void t2() {  } 
    
    public void utility() { }
}  
AFTER
@Test  // Public methods are tests 
class C {  
    public void t1() { }
    
    public void t2() {  } 
    
    private void utility() { }
}  

Migrate Test Timeout

Migration Refactorings

Motivation

Unlike TestNG and JUnit 4, JUnit 5 provides a separate annotation for setting a timeout on a test case. Developers may apply such a change either to tests already undergoing a migration or to adhere to JUnit 5's multiple-annotation design. Such a design is more flexible, as it promotes stacking multiple annotations and enables projects to compose their annotations into one single custom annotation.

Qualities

  • Changeability
  • Uniformity

Code Demonstration

BEFORE
@Test(timeout = 5000)
void t() {}
AFTER
@Timeout(value = 5)
@Test
void t() {}

Replace Loop

Test Smell Refactorings
Variants
1. Replace Fixed Loop with Repeated Tests
2. Replace For-each Loop with Parameterized Test

Motivation

1. When a test repeatedly evaluates the same assertion for a fixed number of times, the code can be simplified with @RepeatedTest.

2. When testing permutations of values with nested for-each loops, extracting a parameterized test guarantees that each permutation is tested and reported independently.

Qualities

  • Uniformity
  • Changeability

Code Demonstration

BEFORE
@Test  
void t() {  
    for (int i = 0; i < 5; i++) {  
        stmt  
        assertEquals(expected, actual) 
    }  
} 
AFTER
@RepeatedTest(5)  
void t() {  
    stmt  
    assertEquals(expected, actual)
}  

Sources

Mining Datasets, Monitoring, StackOverflow, Literature (Soares et al., 2023)

Replace Mystery Guest with @TempDir Parameter

Test Smell Refactorings

Motivation

Using external resources (e.g., creating temporary files or writing to existing ones) in test code is a symptom of the Mystery Guest test smell. The consequences are tests no longer being self-contained, and resource use being risky, requiring try-finally statements for execution consistency.

Qualities

  • Independence

Code Demonstration

BEFORE
@Test
void t() {
    stmt
    File.createTempFile(params)
    stmt'
}
AFTER
@Test
void t(@TempDir File D) {
    stmt
    File.createTempFile(params, D)
    stmt'
}

Sources

Mining Datasets, Monitoring, StackOverflow, Literature (Soares et al., 2023)

Solve Race Condition with Resource Lock

Test Smell Refactorings

Motivation

Running test cases concurrently can be a challenge when the test suite relies on external resources, such as environment variables and system properties. Using specialized test framework features or OS locks to synchronize resource acquisition can help work around race conditions.

Qualities

  • Independence

Code Demonstration

BEFORE
@Execution(CONCURRENT)
class C {
    @Test
    void t() { }
}
AFTER
@Execution(CONCURRENT)
class C {
    @Test
    @ResourceLock(value=SYS_PROPS, mode=READ_WRITE)
    void t() { }
}

Sources

Mining Datasets, Monitoring, StackOverflow, Literature (Soares et al., 2023)

Inline Resource

Test Smell Refactorings

Motivation

Tests that use or create external resources are not self-contained. To remove the dependency between a test method and some external resource, we incorporate that resource by setting up a fixture in the test code that holds the same contents as the resource.

Qualities

  • Uniformity
  • Changeability
  • Effectiveness

Code Demonstration

BEFORE
@BeforeEach
void setUp() {
    res = loadResource("external.txt")  
    value = res.read()
} 
AFTER
@BeforeEach  
void setUp() {  
    value = """  
        INLINED_DATA  
    """
}
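The inlining can be made concrete with a Java text block (Java 15+); the CSV contents below are made up for illustration:

```java
public class InlineResourceDemo {
    // Instead of loadResource("external.txt"), the fixture holds the same
    // contents inline as a text block, so the test is self-contained
    static final String VALUE = """
            id,amount
            1,100
            2,250
            """;

    public static void main(String[] args) {
        // No file system access is required to read the fixture data
        System.out.println(VALUE.lines().count()); // prints 3
    }
}
```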

Instances

No instances recorded.

Sources

Mining Datasets, Monitoring, StackOverflow, Literature (Van Deursen et al., 2001)

Setup External Resource

Test Smell Refactorings

Motivation

Test code that makes optimistic assumptions about the existence (or absence) and state of external resources (such as particular directories or database tables) can cause non-deterministic behaviour in test outcomes. If it is necessary for a test to rely on external resources, such as directories, databases, or files, make sure the test that uses them explicitly creates or allocates these resources before testing, and releases them when done.

Qualities

  • Reliability

Code Demonstration

BEFORE
@Test  
void t() {  
    actual = processResource("data.csv")  
    assertEquals(expected, actual)  
}  
AFTER
@BeforeEach  
void setUp() {  
    createResource("data.csv") 
}  

@Test  
void t() {  
    actual = processResource("data.csv")  
    assertEquals(expected, actual)  
}  

@AfterEach  
void tearDown() {  
    deleteResource("data.csv")  
}  
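The create/use/release cycle above can be sketched with stdlib temporary files; the setUp/processResource/tearDown names mirror the pseudocode, and the CSV content is illustrative:

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class ExternalResourceDemo {
    static Path resource;

    // setUp: explicitly create the resource before the test uses it
    static void setUp() {
        try {
            resource = Files.createTempFile("data", ".csv");
            Files.writeString(resource, "a,b,c");
        } catch (Exception e) { throw new RuntimeException(e); }
    }

    // the test body: reads a resource whose existence is now guaranteed
    static String processResource() {
        try { return Files.readString(resource); }
        catch (Exception e) { throw new RuntimeException(e); }
    }

    // tearDown: release the resource when done
    static void tearDown() {
        try { Files.deleteIfExists(resource); }
        catch (Exception e) { throw new RuntimeException(e); }
    }

    public static void main(String[] args) {
        setUp();
        try {
            System.out.println(processResource()); // prints a,b,c
        } finally {
            tearDown();
        }
    }
}
```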

Instances

No instances recorded.

Sources

Mining Datasets, Monitoring, StackOverflow, Literature (Van Deursen et al., 2001)

Make Resource Unique

Test Smell Refactorings

Motivation

Problems originate from the use of overlapping resource names (Van Deursen et al., 2001), either between different test runs done by the same user or between simultaneous test runs done by different users. For example, tests writing to a shared file may overwrite each other’s data or crash due to concurrent access. Such problems can easily be prevented (or repaired) by using unique identifiers for all allocated resources, for example by including a timestamp.

Qualities

  • Reliability

Code Demonstration

BEFORE
@Test  
void t() {    
    r = createResource("shared-name.txt")  
}  
AFTER
class C {  
    private String uniqueResourceName  
    
    @BeforeEach  
    void setUp() {  
        uniqueResourceName = "resource-" + System.currentTimeMillis()  
        createResource(uniqueResourceName)  
    }  
    
    @Test  
    void t() {  
        actual = loadResource(uniqueResourceName) 
    }  
    
    @AfterEach  
    void tearDown() {  
        deleteResource(uniqueResourceName)  
    }  
}  
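One way to generate the unique identifiers suggested above is combining a timestamp with a UUID, since a timestamp alone can collide when two runs start in the same millisecond. A small runnable sketch:

```java
import java.util.UUID;

public class UniqueResourceDemo {
    // Timestamp plus UUID makes collisions between simultaneous
    // test runs (same user or different users) negligible
    static String uniqueResourceName() {
        return "resource-" + System.currentTimeMillis() + "-" + UUID.randomUUID();
    }

    public static void main(String[] args) {
        String a = uniqueResourceName();
        String b = uniqueResourceName();
        System.out.println(a.equals(b)); // prints false
    }
}
```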

Instances

No instances recorded.

Sources

Mining Datasets, Monitoring, StackOverflow, Literature (Van Deursen et al., 2001)

Introduce Equality Method

Test Smell Refactorings

Motivation

Test methods with multiple assertions are hard to debug and may have their execution interrupted prematurely. When verifying an object’s state, one should compare it with an expected object rather than comparing their properties separately. Otherwise, the number of assertions explodes, impacting readability.

Qualities

  • Changeability
  • Uniformity
  • Brittleness
  • Simplicity

Code Demonstration

BEFORE
@Test  
void t() {  
    actual = method()  
    assertEquals(expected1, actual.getProp1())  
    assertEquals(expected2, actual.getProp2())  
    assertEquals(expected3, actual.getProp3())  
}  
AFTER
class Result {
    @Override
    boolean equals(Object o) { }
}

class C {
    @Test  
    void t() {  
        actual = sut.act()  
        assertEquals(expected, actual)  
    }  
}
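A runnable sketch of the equality method, assuming a hypothetical Result type with two properties; a single object comparison then replaces the property-by-property assertions:

```java
import java.util.Objects;

public class EqualityDemo {
    // Result overrides equals/hashCode so tests can compare whole objects
    static final class Result {
        final String name;
        final int count;

        Result(String name, int count) {
            this.name = name;
            this.count = count;
        }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof Result)) return false;
            Result r = (Result) o;
            return count == r.count && Objects.equals(name, r.name);
        }

        @Override
        public int hashCode() {
            return Objects.hash(name, count);
        }
    }

    public static void main(String[] args) {
        // One comparison instead of asserting each property separately
        Result expected = new Result("ok", 3);
        Result actual = new Result("ok", 3);
        System.out.println(expected.equals(actual)); // prints true
    }
}
```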

Sources

Mining Datasets, Monitoring, StackOverflow, Literature (Van Deursen et al., 2001)

Mock Time Input

Mocking Refactorings

Motivation

Testing software that relies on clock time is challenging, e.g., scheduled events, timers, and calendars. When testing, it is a good practice to mock the time to speed up tests and avoid synchronization problems.

Qualities

  • Efficiency
  • Effectiveness

Code Demonstration

BEFORE
@Test
void t() {
    Cache cache = new Cache(expirationTime: 5, TimeUnit.SECONDS)
    cache.put("key", "value")
    Thread.sleep(5000)
    assertNull(cache.get("key"))
} 
AFTER
@Test
void t() {
    Clock mockClock = Clock.fixed(Instant.now(), ZoneId.systemDefault())
    Cache cache = new Cache(expirationTime: 5, TimeUnit.SECONDS, mockClock)
    cache.put("key", "value")
    
    Instant expiredTime = Instant.now(mockClock).plusSeconds(5)
    mockClock = Clock.fixed(expiredTime, ZoneId.systemDefault())
    cache.setClock(mockClock)
    
    assertNull(cache.get("key"))
} 
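The clock mocking above maps directly onto java.time.Clock, which is injectable and fixable with no mocking framework at all. A runnable sketch with a hypothetical ExpiringValue standing in for the cache:

```java
import java.time.Clock;
import java.time.Duration;
import java.time.Instant;
import java.time.ZoneOffset;

public class ClockDemo {
    // A value that expires after a TTL, reading "now" from an
    // injectable Clock instead of the system time
    static class ExpiringValue {
        private final Instant storedAt;
        private final long ttlSeconds;
        private Clock clock;

        ExpiringValue(long ttlSeconds, Clock clock) {
            this.ttlSeconds = ttlSeconds;
            this.clock = clock;
            this.storedAt = clock.instant();
        }

        void setClock(Clock clock) { this.clock = clock; }

        boolean isExpired() {
            return !clock.instant().isBefore(storedAt.plusSeconds(ttlSeconds));
        }
    }

    public static void main(String[] args) {
        Clock start = Clock.fixed(Instant.parse("2024-01-01T00:00:00Z"), ZoneOffset.UTC);
        ExpiringValue v = new ExpiringValue(5, start);
        System.out.println(v.isExpired()); // prints false

        // Advance time by 5 seconds without Thread.sleep
        v.setClock(Clock.offset(start, Duration.ofSeconds(5)));
        System.out.println(v.isExpired()); // prints true
    }
}
```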

Sources

Mining Datasets, Monitoring, StackOverflow, Literature (Aniche, 2022)

Replace Mocks with Real Instances

Mocking Refactorings

Motivation

Mocks are not real objects; they might behave differently from the objects they try to simulate, preventing tests from catching some bug types, i.e., bugs in the integration of different components. The simpler a dependency is, the fewer the benefits of mocking it. Besides, too much mocking can increase maintenance effort, as reflecting production changes on mocks becomes more frequent. Replacing a mock with a real instance can be beneficial if the latter performs satisfactorily.

Qualities

  • Uniformity

Code Demonstration

BEFORE
@Test  
void t() {  
    Dependency mockDep = mock(Dependency.class)  
    when(mockDep.method())
        .thenReturn(expected)    
    SUT sut = new SUT(mockDep)  
    sut.process()  
    verify(mockDep).method() 
}  
AFTER
@Test  
void t() {  
    Dependency realDep = new LightweightDependency() 
    realDep.configure(expected)    
    SUT sut = new SUT(realDep)  
    sut.process()  
    assertEquals(expected, sut.getResult())  
}  

Sources

Mining Datasets, Monitoring, StackOverflow, Literature

Replace Real Instances with Mocks

Mocking Refactorings

Motivation

Compared to real objects, mocks offer four advantages (Aniche, 2022). Mocks are efficient, thus are good replacements for slow dependencies. Mocks are easy to write, which makes them appropriate solutions for objects of a class that can’t be created easily. Mocks are flexible, which enables simulating complex real world scenarios efficiently. Mocks isolate components, which can be particularly useful to prevent undesired access to database and external APIs.

Qualities

  • Independence
  • Efficiency
  • Reliability

Code Demonstration

BEFORE
@Test  
void t() {  
    CloudDatabase db = new CloudDatabase(config)  
    db.method()  
    SUT sut = new SUT(db)  
    Result result = sut.process()  
    assertEquals(expected, result)  
}  
AFTER
@Test  
void t() {  
    CloudDatabase mockDep = mock(CloudDatabase.class)  
    when(mockDep.method())
        .thenReturn(expected)    
    SUT sut = new SUT(mockDep)  
    sut.process()  
    verify(mockDep).method() 
}  

Reuse Mock

Mocking Refactorings

Motivation

When a code base depends heavily on specific features, many unit tests end up mocking them. One may extract a Fake or Stub class that is reused across many tests in the project.

Qualities

  • Reusability
  • Changeability

Code Demonstration

BEFORE
@Test
void t() {
    Dependency dep = mock(Dependency.class)
    when(dep.method(args)).thenReturn(value)
    stmt
    verify(dep).method(args)
    stmt'
}
AFTER
@Test
void t() {
    Dependency dep = new FakeDependency(args, value)
    stmt
    verify(dep).method(args)
    stmt'
}

class FakeDependency extends Dependency {
    FakeDependency(args, value) {
        when(method(args)).thenReturn(value)
    }
}
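A purely hand-rolled variant of the fake (no mocking API at all) can both return a canned value and record invocations for later verification; the Dependency interface below is illustrative:

```java
import java.util.ArrayList;
import java.util.List;

public class FakeDemo {
    interface Dependency {
        String method(String arg);
    }

    // Reusable fake: returns a canned value and records invocations,
    // so tests can still verify interactions without a mocking framework
    static class FakeDependency implements Dependency {
        final String cannedValue;
        final List<String> calls = new ArrayList<>();

        FakeDependency(String cannedValue) {
            this.cannedValue = cannedValue;
        }

        @Override
        public String method(String arg) {
            calls.add(arg);
            return cannedValue;
        }
    }

    public static void main(String[] args) {
        FakeDependency dep = new FakeDependency("stubbed");
        System.out.println(dep.method("x")); // prints stubbed
        System.out.println(dep.calls);       // prints [x]
    }
}
```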

Sources

Mining Datasets, Monitoring, StackOverflow, Literature

Replace Inheritance by Mocking API

Mocking Refactorings

Motivation

Inheritance requires the developer to manually craft additional attributes/features in the subclass for tracking the execution of the mock objects. The logic behind the attributes is implicit, and may blur the testing logic.

Qualities

  • Simplicity

Code Demonstration

BEFORE
class T extends MockBaseClass {
    Dependency dep
    ClassUnderTest obj
    
    @Override
    void setUp() {
        super.setUp()
        dep = createDependencyMock()
        obj = new ClassUnderTest(dep)
    }
    
    @Test
    void t() {
        stubDependencyBehavior(dep)
        obj.performAction()
        assertCustomTracking(dep)
    }
}
AFTER
class T {
    @Mock
    Dependency dep
    
    @InjectMocks
    ClassUnderTest obj
    
    @Before
    void setUp() {
        MockitoAnnotations.initMocks(this)
    }
    
    @Test
    void t() {
        when(dep.specificMethod()).thenReturn(mockValue)
        obj.performAction()
        verify(dep).specificMethod()
    }
}

Instances

No instances recorded.

Sources

Mining Datasets, Monitoring, StackOverflow, Literature (Wang et al., 2023)

Replace Mocking API with Anonymous Subclass

Mocking Refactorings

Motivation

Java's anonymous classes are commonly used for creating stubs. Wrapping them with a mocking framework's spies can be redundant, so it might be worth dropping the mocking API and replacing it with a purely manual implementation of its logic. This way, developers keep the stubs' customizability while simplifying the test code and potentially dropping the dependency on the mocking framework.

Qualities

  • Reusability

Code Demonstration

BEFORE
@Test
void t() {
    Dependency obj = MockingFramework.spy(new Dependency() {
        @Override
        ReturnType method(ParamType args) {
            methodCalled = true
            return value
        }
    })
    MockingFramework.when(obj.method(args)).thenReturn(value)
    stmt
    MockingFramework.verify(obj).method(args)
    stmt'
}
AFTER
@Test
void t() {
    var obj = new Dependency() {
        boolean methodCalled = false
        @Override
        ReturnType method(ParamType args) {
            methodCalled = true
            return value
        }
    }
    stmt
    assertTrue(obj.methodCalled)
    stmt'
}
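Note that in plain Java the extra tracking field of an anonymous subclass is only reachable when the variable keeps the anonymous type, e.g., via var (Java 10+). A runnable sketch with an illustrative Dependency interface:

```java
public class AnonymousStubDemo {
    interface Dependency {
        int method();
    }

    // 'var' keeps the anonymous subclass's type, so its extra tracking
    // field stays accessible without any mocking framework
    static boolean demo() {
        var dep = new Dependency() {
            boolean methodCalled = false;

            @Override
            public int method() {
                methodCalled = true;
                return 42;
            }
        };
        dep.method();
        return dep.methodCalled; // manual verification replaces verify(...)
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints true
    }
}
```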

Sources

Mining Datasets, Monitoring, StackOverflow, Literature (Pereira and Hora, 2020)

Replace Test Double Type

Mocking Refactorings

Motivation

There are five different test doubles (i.e., dummy, stub, fake, spy, and mock). Dummy is the simplest instance to fill parameter lists. Stub is a test-only subclass overriding methods for behaviour customization. Fakes are working simplified implementations. Spies and Mocks provide mechanisms for behaviour verification.

Qualities

  • Efficiency
  • Simplicity

Code Demonstration

BEFORE
@Test  
void t() {  
    Dependency stub = new Dependency() {  
        @Override  
        Result method() { return expected } 
    }  
    SUT sut = new SUT(stub)  
    assertEquals(expected, sut.process())
}  
AFTER
@Test  
void t() {  
    Dependency mockDep = mock(Dependency.class)
    when(mockDep.method())
        .thenReturn(expected) 
    SUT sut = new SUT(mockDep)  
    sut.process()  
    verify(mockDep).method()
}
BEFORE
SUT sut = new SUT(null)
AFTER
class FakeDependency implements Dependency {  
    void method() { }  
}  
SUT sut = new SUT(new FakeDependency()) 
BEFORE
class SpyDependency extends RealDependency {  
    boolean called = false  
    @Override  
    void method() {  
        called = true  
        super.method()  
    }  
}  
AFTER
@Spy RealDependency spyDep  

@Test
void t() {
    verify(spyDep).method()
}

Sources

Mining Datasets, Monitoring, StackOverflow, Literature (Aniche, 2022)

Replace Test Double with ObjectMother

Mocking Refactorings

Motivation

When we have many tests using similar but not identical mocks, we need to recreate the mock for each test method. Alternatively, we can create an Object Mother to simplify tests and centralize complex object construction. Object Mother is a Test Utility Class whose methods provide specific versions of complex type instances required by multiple tests.

Qualities

  • Uniformity
  • Efficiency

Code Demonstration

BEFORE
@Test  
void t1() {  
    Dependency mockDep = mock(Dependency.class)  
    when(mockDep.method())
        .thenReturn(expected)    
    SUT sut = new SUT(mockDep)  
    assertEquals(expected, sut.process())  
}  

@Test  
void t2() {  
    Dependency mockDep = mock(Dependency.class)  
    when(mockDep.method())
        .thenReturn(expected') 
    SUT sut = new SUT(mockDep)  
    assertEquals(expected', sut.analyze())    
}  
AFTER
class DependencyMother {  
    static Dependency createConfiguredDependency() {  
        Dependency dep = new Dependency() 
        return dep  
    }  
}  

@Test  
void t1() {  
    SUT sut = new SUT(DependencyMother.createConfiguredDependency())  
    assertEquals(expected, sut.process())  
}  

@Test  
void t2() {  
    SUT sut = new SUT(DependencyMother.createConfiguredDependency())  
    assertEquals(expected', sut.analyze())  
}  
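Stripped of the mocking framework, an Object Mother is just a utility class with factory methods for named variants of a complex type; a runnable sketch with hypothetical types:

```java
public class ObjectMotherDemo {
    // Hypothetical complex dependency with several knobs to configure
    static class Dependency {
        String mode = "default";
        int retries = 0;
    }

    // Object Mother: centralizes construction of the variants tests need,
    // so each test asks for one by name instead of re-assembling it
    static class DependencyMother {
        static Dependency simple() {
            return new Dependency();
        }

        static Dependency resilient() {
            Dependency d = new Dependency();
            d.mode = "resilient";
            d.retries = 3;
            return d;
        }
    }

    public static void main(String[] args) {
        System.out.println(DependencyMother.resilient().retries); // prints 3
        System.out.println(DependencyMother.simple().mode);       // prints default
    }
}
```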

Sources

Mining Datasets, Monitoring, StackOverflow, Literature (Fowler, 2006; Meszaros, 2007)

Standardize Test Utilities for Project-Wide Reuse

Test Suite Refactorings
Variants
1. Generalize Utility Class
2. Export Utility Methods as in-house Test Framework
3. Reuse Fixtures across Tests Project-wide
4. Reuse Cached Spring Context across Tests Project-wide
5. Reuse Cached Database Query Results

Motivation

Test utility classes often become rigid or isolated. Standardizing them promotes reusability, efficiency, and consistency across the entire project or organization.

1. Test utility classes may have methods with strict API, which hinders reuse for new test cases. Developers may generalize utility methods' return type and parameters set by prioritizing interfaces or super classes over concrete classes.

2. Some large communities and organizations take one step further creating an in-house Test Framework to share common test logic among many of their projects.

3. By organizing repetitive, shared setup and teardown logic in a global scope, developers can avoid duplicating test fixtures across multiple files.

4. Reusing cached Spring contexts across tests ensures that contexts with identical configurations are loaded only once, thus avoiding the repeated initialization of the application context.

5. To simplify test data setup for complex scenarios while maintaining test isolation, real database interactions can be recorded during an initial execution and subsequently replayed in later test runs.

Qualities

  • Reusability
  • Efficiency

Code Demonstration

BEFORE
class UtilsClass {
    static boolean helper(ArrayList<?> p) {  }
}

class C {
    void t() {
        var sut = new SUT()
        var collection = sut.getSet()
        var list = convertToList(collection)
        assertTrue(UtilsClass.helper(list))
    }
}
AFTER
class UtilsClass {
    static boolean helper(Collection<?> p) {  }
}
    
class C {
    void t() {
        var sut = new SUT()
        var collection = sut.getSet()
        assertTrue(UtilsClass.helper(collection))
    }
}
BEFORE
class ProjectUtilsClass {
    static void validateResult(Result r) 
                { }
}

class C {
    void t() {
        ProjectUtilsClass.validateResult(r)
    }
}
AFTER
package com.inhouse.framework
class TestAssertions {
    static void validateResult(Result r) { }
}

class C {
    void t() {
        com.inhouse.framework.TestAssertions.validateResult(r)
    }
}
BEFORE
class C {
    @BeforeEach
    void setUp() { 
        stmt
    }
}

class C2 {
    @BeforeEach
    void setUp() { 
        stmt
    }
}
AFTER
class GlobalTestFixture {
    @BeforeAll
    static void globalSetup() {  }
}

@ExtendWith(GlobalTestFixture.class)
class C {  }

@ExtendWith(GlobalTestFixture.class)
class C1 {  }
BEFORE
@ContextConfiguration(locations = "classpath:1.xml")
class C1 { }
@ContextConfiguration(locations = "classpath:2.xml")
class C2 { }
AFTER
@ContextConfiguration(locations = "classpath:C.xml")
class C1 { }
@ContextConfiguration(locations = "classpath:C.xml")
class C2 { }
BEFORE
Result r = database.query("SELECT ...")
assertEquals(expected, r.hasData())
AFTER
Result r = DatabaseReplayer.replay("SELECT ...")
assertEquals(expected, r.hasData())
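Variant 1's generalization can be made runnable: widening the helper's parameter from ArrayList to Collection lets both List- and Set-based callers reuse it without conversion (the names are illustrative):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.Set;

public class GeneralizedUtilDemo {
    // Generalized signature: Collection<?> instead of ArrayList<?>, so
    // the helper accepts any collection the SUT happens to return
    static boolean helper(Collection<?> p) {
        return !p.isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(helper(new ArrayList<>(List.of(1)))); // prints true
        System.out.println(helper(Set.of("a")));                 // prints true
    }
}
```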

Custom Runner

Test Suite Refactorings

Motivation

Unlike JUnit 5's extension model, only one JUnit 4 Test Runner can be applied to a Test Class at a time. There are two strategies one can adopt to further customize the Test Framework behaviour for a Test Class that already relies on a special Test Runner: using JUnit 4 rules or extracting a subclass of the special Test Runner.

Qualities

  • Reusability
  • Changeability

Code Demonstration

BEFORE
@RunWith(SpringJUnit4ClassRunner.class)  
@ContextConfiguration(classes = {
    DatabaseConfig.class
})  

public class C {  
    @Test  
    public void t() {  }  
}  
AFTER
@RunWith(CustomIntegrationTestRunner.class)  
public class C {  
    @Test  
    public void t() {  }  
}  

public class CustomIntegrationTestRunner extends SpringJUnit4ClassRunner {  
    public CustomIntegrationTestRunner(Class<?> clazz) 
        throws InitializationError {  
        super(clazz)
    }  

    @Override  
    protected TestContextManager createTestContextManager(Class<?> clazz) {  
        return new TestContextManager(clazz) {  
            @Override  
            public void prepareTestInstance(Object testInstance) {  
                registerConfigurationClasses(DatabaseConfig.class) 
                super.prepareTestInstance(testInstance)
            }  
        }
    }  
}  

Custom Annotation

Test Suite Refactorings

Motivation

Custom annotations are often employed to temporarily tag test methods for third-party library integration (e.g., architectural tests), to customize both test and production code with the same JUnit-like, annotation-based "extension model," or to attach metadata to tests (e.g., the issue fixed and the method under test).

Qualities

  • Uniformity

Code Demonstration

BEFORE
@Test
@Timeout(value = 5, unit = TimeUnit.SECONDS)
@Tag("integration")
void t() {    
    stmt
}
AFTER
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@Test
@Timeout(value = 5, unit = TimeUnit.SECONDS)
@Tag("integration")
public @interface IntegrationTest {}

@IntegrationTest
void t() {
    stmt
}
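Metadata attached through a custom annotation can be read back at runtime, e.g., by a reporting tool or a custom extension. A minimal, self-contained sketch using only the JDK, with a hypothetical FixedIssue annotation:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// Hypothetical metadata annotation recording which issue a test covers
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface FixedIssue {
    String value();
}

public class MetadataDemo {
    @FixedIssue("ISSUE-42")
    void t() { }

    public static void main(String[] args) throws Exception {
        // Look up the annotation at runtime, as a report generator might
        Method m = MetadataDemo.class.getDeclaredMethod("t");
        FixedIssue issue = m.getAnnotation(FixedIssue.class);
        System.out.println(issue.value());
    }
}
```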

Sources

Mining Datasets • Monitoring • StackOverflow • Literature (Kim et al., 2021)

Restructure Test Suite

Test Suite Refactorings
Variants
1. Nest Test Classes
2. Move Test Method to Another Class
3. Denest Test Classes

Motivation

1. Nested test classes are best used when tests need to be organized into logical, hierarchical groups.

2. Test methods are moved to other classes to reflect changes in the production code or as part of a systematic test suite reorganization.

3. When a nested class evolves beyond the scope of its outer class, one may denest it by moving the nested class to its own separate file.

Qualities

  • Uniformity

Code Demonstration

BEFORE
class FlatTest {  
    @BeforeEach  
    void setUp() {  
        setUp 
    }  
    
    @Test  
    void T1() {  
        setUp'  
        assert(E1)  
    }  
    
    @Test  
    void T2() {  
        setUp''
        assert(E2)  
    }  
}  
AFTER
class OuterTest {  
    @BeforeEach  
    void setUp() {  
        setUp
    }  
    
    @Nested  
    class C1 {  
        @BeforeEach  
        void setUp'() { }  
        @Test  
        void T1() {  
            assert(E1)
        }  
    }  
    
    @Nested  
    class C2 {  
        @BeforeEach  
        void setUp''() { }  
        @Test  
        void T2() {  
            assert(E2)
        }  
    }  
}  
BEFORE
class C1 {  
    @Test  
    void t() {  
        assertEquals(expected, ClassB.method())  
    }  
}  

class C2 { }
AFTER
class C1 { }

class C2 {  
    @Test  
    void t() {  
        assertEquals(expected, ClassB.method())  
    }  
}  
BEFORE
class OuterTest {  
    @Nested  
    class C {  }  
}  
AFTER
class OuterTest {  }  
class C {  }  

Extend Framework

Test Suite Refactorings
Variants
1. Test Extension
2. Enforce Architectural Rules

Motivation

1. JUnit 5 supports an extension model and JUnit 4 supports rule-based extension. Both features prioritize composition over inheritance.

2. ArchUnit allows teams to define architectural rules in Java, creating a shared understanding of code structure that adapts as projects evolve.

Qualities

  • Uniformity
  • Reusability
  • Changeability

Code Demonstration

BEFORE
@Test  
void t() {  
    Resource res = acquireResource()  
    try {  
        actual = res.method()
        assertEquals(expected, actual)  
    } finally {  
        releaseResource(res)  
    }  
}    
AFTER
@ExtendWith(ResourceExtension.class)  
@Test  
void t(Resource res) {
    actual = res.method()
    assertEquals(expected, actual) 
}    

class ResourceExtension implements BeforeEachCallback, AfterEachCallback {  
    public void beforeEach(ExtensionContext ctx) {  
        ctx.getStore(Namespace.GLOBAL).put("resource", acquireResource())  
    }  
    
    public void afterEach(ExtensionContext ctx) {  
        releaseResource(ctx.getStore(Namespace.GLOBAL).get("resource", Resource.class))  
    }  
}  
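For the Resource parameter of t to be injected, the extension must additionally implement JUnit 5's ParameterResolver; a minimal sketch of the extra methods on the same class, in the catalogue's pseudocode style:

```
class ResourceExtension implements BeforeEachCallback, AfterEachCallback, ParameterResolver {
    // callbacks as above, plus:
    public boolean supportsParameter(ParameterContext pc, ExtensionContext ctx) {
        return pc.getParameter().getType() == Resource.class
    }
    
    public Object resolveParameter(ParameterContext pc, ExtensionContext ctx) {
        return ctx.getStore(Namespace.GLOBAL).get("resource", Resource.class)
    }
}
```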
BEFORE
@Test
void t() {
    var violatingFiles = Files.walk(Paths.get("src"))
        .filter(p -> p.toString().endsWith(".java"))
        .filter(p -> {
            var src = Files.readString(p)
            return src.contains("package " + basePackage)
                && src.contains("import " + forbiddenImport)
        })
        .collect(toList())
    
    assertTrue(violatingFiles.isEmpty())
}
AFTER
@ArchTest
static final ArchRule t = noClasses()
    .that().resideOutsideOfPackage(basePackage)
    .should().accessClassesThat()
    .resideInAnyPackage(forbiddenImport)
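Outside a JUnit run, the same rule can also be evaluated directly; a sketch assuming ArchUnit's ClassFileImporter, again in pseudocode style:

```
JavaClasses classes = new ClassFileImporter().importPackages(basePackage)
noClasses()
    .that().resideOutsideOfPackage(basePackage)
    .should().accessClassesThat()
    .resideInAnyPackage(forbiddenImport)
    .check(classes)  // throws AssertionError on violations
```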

Refactoring Catalogue System © 2026 • Built with SvelteKit