25 changes: 25 additions & 0 deletions bin/asynct
@@ -0,0 +1,25 @@
#! /usr/bin/env node

try {
  // always check for a local copy of async_testing first
  var testing = require('./async_testing');
}
catch (err) {
  if (err.message == "Cannot find module './async_testing'") {
    // not found locally, so look in the require path for async_testing
    var testing = require('async_testing');
  }
  else {
    throw err;
  }
}

exports.test = function (test) {
  test.ok(false, "this should not be called!")
}

process.ARGV.shift() // drop 'node'
process.ARGV.shift() // drop this file; if it is left in, the runner tries to run this file as a test, which loops forever and never exits
process.ARGV.unshift('node')

testing.run(process.ARGV);

32 changes: 32 additions & 0 deletions bin/node-async-test.js
@@ -0,0 +1,32 @@
#! /usr/bin/env node

try {
  // always check for a local copy of async_testing first
  var testing = require('./async_testing');
}
catch (err) {
  if (err.message == "Cannot find module './async_testing'") {
    // not found locally, so look in the require path for async_testing
    var testing = require('async_testing');
  }
  else {
    throw err;
  }
}

testing.run(null, process.ARGV, done);

function done(allResults) {
  // we want to have our exit status be the number of problems

  var problems = 0;

  for (var i = 0; i < allResults.length; i++) {
    if (allResults[i].tests.length > 0) {
      problems += allResults[i].numErrors;
      problems += allResults[i].numFailures;
    }
  }

  process.exit(problems);
}
94 changes: 94 additions & 0 deletions docs/api.html
@@ -0,0 +1,94 @@
<h1>module: async_testing</h1>

<h2>method: runSuite (testSuite,opts)</h2>

<p><code>testSuite</code> is a test module, for example <code>runSuite(require('./test/simpleTest'), opts)</code>.
Each property in the module's exports should be a test. A test is just a method which takes one argument, <code>test</code>,
makes assertions by calling <code>test.ok(true)</code> etc., and eventually calls <code>test.finish()</code>.
Making an assertion after <code>test.finish()</code>, or calling <code>test.finish()</code> twice, results in a
testAlreadyFinished error. Not calling <code>test.finish()</code> at all is an error as well (see <code>onSuiteDone</code> with status <code>'exit'</code>).</p>
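
<p>For example, a minimal test module might look like the following sketch (the file name and test names are hypothetical):</p>

<pre>
// test/simpleTest.js (hypothetical)
exports['addition works'] = function (test) {
  test.ok(1 + 1 === 2, 'one plus one should equal two');
  test.finish();
};

exports['async callback fires'] = function (test) {
  setTimeout(function () {
    test.ok(true, 'timer fired');
    test.finish(); // call exactly once, after all assertions
  }, 10);
};
</pre>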

<p>Available configuration options:</p>

<ul>
<li><code>parallel</code>: boolean, whether the tests should be run in parallel or serially. Parallel is faster, but its error reporting is less precise.</li>
<li><code>testName</code>: string or array of strings, the name(s) of the tests to run</li>
<li><code>name</code>: string, the name of the suite being run</li>
<li><code>onTestStart</code>: callback, see below</li>
<li><code>onTestDone</code>: callback, see below</li>
<li><code>onSuiteDone</code>: callback, see below</li>
</ul>

<p>example:</p>

<pre>
{ name: 'string'
, testName: [array of test names to run]
, onTestStart: function (test) {}
, onTestDone: function (status,test) {}
, onSuiteDone: function (status,report) {}
}
</pre>

<h3>callback arguments: onSuiteDone (status,report)</h3>

<p>status may be:</p>

<ul>
<li><em>complete</em> : valid result, success or failure</li>
<li><em>exit</em> : some tests did not call <code>test.finish()</code></li>
<li><em>loadError</em> : an error occurred while loading the test, e.g. a syntax error</li>
<li><em>error</em> : the test threw an error.</li>
</ul>

<p>Currently the report differs for each status:</p>

<ul>
<li>complete </li>
</ul>

<pre>
{tests: //list of tests
[
{ name: [name of test]
, numAssertions: [number of Assertions in test]
, failure: [error which caused failure] // only if this test failed, or errored.
, failureType: ['assertion' or 'error']
}
]
}
</pre>

<ul>
<li>exit [list of tests which did not finish]</li>
<li>loadError [error message (string)]</li>
<li>error </li>
</ul>

<pre>
{ err: errorObject
, tests: [list of names of tests which were running when the error occurred]
}
//usually an error is caught by the test and registered as a failure.
//sometimes a test throws an error asynchronously, and async_testing doesn't
//know which test it came from.
</pre>

<h3>callback arguments: onTestStart (test)</h3>

<ul>
<li>test: name of the test which has started.</li>
</ul>

<h3>callback arguments: onTestDone (status,test)</h3>

<ul>
<li>status : 'success', or 'failure'</li>
<li>report:</li>
</ul>

<pre>
{ name: [name of test]
, numAssertions: [number of Assertions in test]
, failure: [error which caused failure] // only if this test failed, or errored.
, failureType: ['assertion' or 'error']
}
</pre>
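
<p>Putting the options and callbacks together, a typical invocation might look like this sketch (the require paths and handler bodies are illustrative only):</p>

<pre>
var testing = require('async_testing');

testing.runSuite(require('./test/simpleTest'), {
  name: 'simpleTest'
, parallel: false
, onTestStart: function (test) { console.log('started: ' + test); }
, onTestDone: function (status, report) {
    console.log(status + ': ' + report.name
      + ' (' + report.numAssertions + ' assertions)');
  }
, onSuiteDone: function (status, report) {
    if (status === 'complete') console.log(report.tests.length + ' tests ran');
    else console.log('suite ended with status ' + status);
  }
});
</pre>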
94 changes: 94 additions & 0 deletions docs/api.markdown
@@ -0,0 +1,94 @@
module: async_testing
=====================


## method: runSuite (testSuite,opts)

`testSuite` is a test module, for example `runSuite(require('./test/simpleTest'), opts)`.
Each property in the module's exports should be a test. A test is just a method which takes one argument, `test`,
makes assertions by calling `test.ok(true)` etc., and eventually calls `test.finish()`.
Making an assertion after `test.finish()`, or calling `test.finish()` twice, results in a
testAlreadyFinished error. Not calling `test.finish()` at all is an error as well (see `onSuiteDone` with status `'exit'`).
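
For example, a minimal test module might look like the following sketch (the file name and test names are hypothetical):

<pre>
// test/simpleTest.js (hypothetical)
exports['addition works'] = function (test) {
  test.ok(1 + 1 === 2, 'one plus one should equal two');
  test.finish();
};

exports['async callback fires'] = function (test) {
  setTimeout(function () {
    test.ok(true, 'timer fired');
    test.finish(); // call exactly once, after all assertions
  }, 10);
};
</pre>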

Available configuration options:

+ parallel: boolean, whether the tests should be run in parallel or serially. Parallel is faster, but its error reporting is less precise.
+ testName: string or array of strings, the name(s) of the tests to run
+ name: string, the name of the suite being run
+ onTestStart: callback, see below
+ onTestDone: callback, see below
+ onSuiteDone: callback, see below

example:

<pre>
{ name: 'string'
, testName: [array of test names to run]
, onTestStart: function (test) {}
, onTestDone: function (status,test) {}
, onSuiteDone: function (status,report) {}
}
</pre>

### callback arguments: onSuiteDone (status,report)

status may be:

+ _complete_ : valid result, success or failure
+ _exit_ : some tests did not call `test.finish()`
+ _loadError_ : an error occurred while loading the test, e.g. a syntax error
+ _error_ : the test threw an error.

Currently the report differs for each status:

+ complete

<pre>
{tests: //list of tests
[
{ name: [name of test]
, numAssertions: [number of Assertions in test]
, failure: [error which caused failure] // only if this test failed, or errored.
, failureType: ['assertion' or 'error']
}
]
}
</pre>

+ exit [list of tests which did not finish]
+ loadError [error message (string)]
+ error

<pre>
{ err: errorObject
, tests: [list of names of tests which were running when the error occurred]
}
//usually an error is caught by the test and registered as a failure.
//sometimes a test throws an error asynchronously, and async_testing doesn't
//know which test it came from.
</pre>

### callback arguments: onTestStart (test)

+ test: name of the test which has started.

### callback arguments: onTestDone (status,test)

+ status : 'success', or 'failure'
+ report:

<pre>
{ name: [name of test]
, numAssertions: [number of Assertions in test]
, failure: [error which caused failure] // only if this test failed, or errored.
, failureType: ['assertion' or 'error']
}
</pre>
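
Putting the options and callbacks together, a typical invocation might look like this sketch (the require paths and handler bodies are illustrative only):

<pre>
var testing = require('async_testing');

testing.runSuite(require('./test/simpleTest'), {
  name: 'simpleTest'
, parallel: false
, onTestStart: function (test) { console.log('started: ' + test); }
, onTestDone: function (status, report) {
    console.log(status + ': ' + report.name
      + ' (' + report.numAssertions + ' assertions)');
  }
, onSuiteDone: function (status, report) {
    if (status === 'complete') console.log(report.tests.length + ' tests ran');
    else console.log('suite ended with status ' + status);
  }
});
</pre>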


## method: runFile (modulepath,opts)
`modulepath` is the path to the test suite to run. The suite is run in a child process; opts and callbacks are the same as for runSuite.
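
For example (a sketch; the path is hypothetical and the options take the same shape as for runSuite):

<pre>
var testing = require('async_testing');

// run the suite in ./test/simpleTest.js in a child process
testing.runFile(__dirname + '/test/simpleTest.js', {
  onTestDone: function (status, report) { console.log(status + ': ' + report.name); }
, onSuiteDone: function (status, report) { console.log('suite done: ' + status); }
});
</pre>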





5 changes: 5 additions & 0 deletions lib/asynct_adapter.js
@@ -0,0 +1,5 @@

exports.runTest = function (file, callbacks) {
  // load the test module and hand it to the shared async_testing suite runner
  var test = require(file)
  require('async_testing').runSuite(test, callbacks)
}