Forum: Installation

Post: problem building source (ant check) on Ubuntu 11.04

problem building source (ant check) on Ubuntu 11.04
CraigHarris
Aug 24, 2011
Has anyone encountered this error when building the server from source during "ant check":


[exec] eecheck:
[exec] [exec] make: Entering directory `/home/craig/voltdb/trunk/obj/release'
[exec] [exec] make: Nothing to be done for `main'.
[exec] [exec] make: Leaving directory `/home/craig/voltdb/trunk/obj/release'
[exec] [exec] make: Entering directory `/home/craig/voltdb/trunk/obj/release'
[exec] [exec] make: Nothing to be done for `test'.
[exec] [exec] make: Leaving directory `/home/craig/voltdb/trunk/obj/release'
[exec] [exec] Detected 4 hardware threads to use during the build
[exec] [exec] Make returned: 0
[exec] [exec] Traceback (most recent call last):
[exec] [exec] File "build.py", line 438, in <module>
[exec] [exec] retval = runTests(CTX)
[exec] [exec] File "/home/craig/voltdb/trunk/buildtools.py", line 339, in runTests
[exec] [exec] process = Popen(executable="valgrind", args=["valgrind", "--leak-check=full", "--show-reachable=yes", "--error-exitcode=-1", targetpath], stderr=PIPE, bufsize=-1)
[exec] [exec] File "/usr/lib/python2.7/subprocess.py", line 672, in __init__
[exec] [exec] errread, errwrite)
[exec] [exec] File "/usr/lib/python2.7/subprocess.py", line 1213, in _execute_child
[exec] [exec] raise child_exception
[exec] [exec] OSError: [Errno 2] No such file or directory
[exec]
[exec] BUILD FAILED
Yes.
jhugg
Aug 24, 2011
Sorry. On Linux, our build system requires Valgrind to be installed for certain tests, even if you don't plan to run those tests. We have a to-do item to make that error message much clearer, but the good news is that it's an easy fix.

Run "sudo apt-get install valgrind" or "sudo yum install valgrind", depending on your distro, and the build should then work.
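If it helps, here's a small sketch that picks the right command for your distro. The apt-get/yum commands are the ones above; the package-manager detection logic is just a convenience I'm adding, not part of the VoltDB build:

```shell
# Print the valgrind install command appropriate for this distro.
# Detection is by which package manager is on the PATH.
if command -v apt-get >/dev/null 2>&1; then
    echo "sudo apt-get install valgrind"
elif command -v yum >/dev/null 2>&1; then
    echo "sudo yum install valgrind"
else
    echo "install valgrind with your distro's package manager" >&2
fi
```

After installing, `command -v valgrind` should print a path; until it does, "ant check" will keep failing with that OSError.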
That worked
CraigHarris
Aug 24, 2011
WhooHOOO! Thank you! Zoomed past that issue and am now deep in the land of junit regression suite testing.
TestMultiPartitionSuite failure during ant check
CraigHarris
Aug 25, 2011
Has anyone seen this? Are there logs I should be looking at for details?

[exec] [junit] Test org.voltdb.regressionsuites.TestMultiPartitionSuite FAILED (crashed)
[exec] [junit] Running org.voltdb.regressionsuites.TestOrderBySuite
[exec] [junit] Tests run: 40, Failures: 0, Errors: 0, Time elapsed: 273.97 sec
[exec] [junit] Running org.voltdb.regressionsuites.TestPartitionDetection
[exec] [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 20.26 sec
[exec] [junit] External VoltDB process terminated abnormally with return: 255
[exec] [junit] Running org.voltdb.regressionsuites.TestPlansGroupBySuite
[exec] [junit] Tests run: 42, Failures: 0, Errors: 0, Time elapsed: 285.33 sec
[exec] [junit] Running org.voltdb.regressionsuites.TestReplicatedSaveRestoreSysprocSuite
[exec] [junit] Tests run: 15, Failures: 0, Errors: 0, Time elapsed: 41.27 sec
[exec] [junit] Running org.voltdb.regressionsuites.TestReplicationSuite
[exec] [junit] Running org.voltdb.regressionsuites.TestReplicationSuite
[exec] [junit] org.voltdb.regressionsuites.TestReplicationSuite:testSinglePartitionInsert-localCluster-3-4-JNI had an error.
[exec] [junit] Tests run: 0, Failures: 0, Errors: 1, Time elapsed: 0.00 sec
[exec] [junit] Test org.voltdb.regressionsuites.TestReplicationSuite FAILED (crashed)
There's some stuff you can do
jhugg
Aug 25, 2011
As background, some of our tests push VoltDB harder than it was designed to be pushed. Many of the org.voltdb.regressionsuites.*Suite tests run multiple instances of VoltDB on a single machine, opening VoltDB up to error conditions that are difficult to hit under a normal deployment. For example, messages between instances may be badly delayed if one instance isn't scheduled on a CPU for a long time. Running on machines with less RAM and/or fewer CPUs can exacerbate these issues. While we still get a lot of value out of these tests, we're not thrilled with the status quo and are constantly trying to make them pass more consistently.

As for what you can do to find out why a test failed:

1. There may be a "junit-noframes.html" file in /obj/release/testoutput that summarizes the JUnit results (if all tests complete).
2. Run the failing test by itself, which prints its output on the console; JUnit has a nasty tendency to hide this output. To run a standalone test, use the "junitclass" ant target like so:
"ant junitclass -Djunitclass=TestReplicationSuite"
3. You can run in debug mode by adding "-Dbuild=debug" to any ant task. This isn't all that different, but it enables assertions in C++, so you may get better C++ stack traces if the system crashes there (as opposed to Java).
4. The tests that spawn processes save their output to /obj/release/testoutput. You can read these files and look for errors printed to the console. If you run in debug mode, substitute "debug" for "release" in that path.
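Putting the suggestions above together, a session might look like this. The paths assume the trunk checkout layout shown in the logs earlier in this thread; adjust them to your tree:

```shell
# Run one suite standalone so JUnit doesn't swallow its console output:
cd ~/voltdb/trunk
ant junitclass -Djunitclass=TestReplicationSuite

# Same suite with C++ assertions enabled; output then lands under
# obj/debug instead of obj/release:
ant junitclass -Djunitclass=TestReplicationSuite -Dbuild=debug

# Scan the logs written by the spawned server processes, listing any
# file that mentions an error or exception (case-insensitive):
grep -ril "error\|exception" obj/release/testoutput/
```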

We might have additional info if you let us know what kind of system you're running on, as well as which SVN revision and branch you're building. Not all tests always pass on head, but we strive to make them pass for our releases.
Excellent Info
CraigHarris
Aug 25, 2011
Thank you!