Without setting dependent plug-ins' minimum versions to match the
target platform we are aiming for, we imply (and therefore allow installing)
CDT into older versions of Eclipse where CDT does not actually work.
This can surface in very odd ways, such as an IllegalAccessError, when
the platform has made allowed API changes.
However, rather than updating every single bundle in CDT, only the
o.e.cdt.core/ui bundles are being updated, as this should achieve the
desired result without every other bundle needing to be touched.
See Bug 536448
Part of #77
Because of
[changes](https://www.eclipse.org/eclipse/news/4.26/platform_isv.php#JobManager_Implementation)
in the Eclipse Platform, where the job manager's behaviour changes (within
the API), consumers of the job manager can deadlock due to incorrect
assumptions.
In particular, where we call job.schedule(), the callbacks to the
IJobChangeListeners can happen on different threads. As CDT was holding
a lock while calling schedule() that is also required in those
listeners, we must no longer hold that lock when calling schedule().
As the code already dealt with the case where there was a delay
between job.schedule() and where and when the job was run, we can
move the schedule call out of the synchronized block.
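A minimal sketch of the shape of the change; `lock` and `prepareJob()` are illustrative placeholders, not the actual CDT code:
```java
// Before: job.schedule() was called inside synchronized (lock), and the
// IJobChangeListeners also need 'lock'; because the listener callbacks can
// now run on a different thread than the scheduling one, that can deadlock.
// After: do the state set-up under the lock, then schedule outside it.
Job job;
synchronized (lock) {
    job = prepareJob(); // illustrative: set up whatever state the job needs
}
// Listeners may fire on another thread from here on; they can take 'lock'
// freely because we no longer hold it while scheduling.
job.schedule();
```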
Fixes #81
This change adds ALL_FLAGS, which does not limit tool options to
those declared as IOption::isForScannerDiscovery when launching the
compiler to discover compiler built-ins.
This is needed because many other flags, whether entered manually in "Other
flags" or among the existing flags with checkboxes, such as "-ansi",
"-fPIC", and "-fstack-protector-all", also affect scanner discovery,
as they can all change which macros are built into the compiler.
The current solution has the drawback that some settings, like -I and -D,
then appear twice, for example in the "Includes" node in the "Project
Explorer".
My only reservation about this change is that there may be an option
that can be specified successfully at build time but, when used
at scanner discovery time, causes the compiler to fail or return
incorrect results. Therefore I have added a new field,
excludeFromScannerDiscovery, to tool options (buildDefinitions
extension point) that allows tool integrators to always exclude
a command line option from ALL_FLAGS. I have also added
a new "Other flags (excluded from discovery)" field to the
"Miscellaneous" tab to allow such compiler options to be entered
by the user.
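As an illustration only of the intended filtering, with simplified stand-in types rather than the managed build API:
```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for a tool option contributed via buildDefinitions.
record ToolOption(String commandLine, boolean excludedFromScannerDiscovery) {
}

class ScannerDiscoveryArgs {
    // With ALL_FLAGS, every option is passed to the built-ins discovery run,
    // except those the integrator marked excludeFromScannerDiscovery="true".
    static List<String> collect(List<ToolOption> options) {
        List<String> args = new ArrayList<>();
        for (ToolOption option : options) {
            if (!option.excludedFromScannerDiscovery()) {
                args.add(option.commandLine());
            }
        }
        return args;
    }
}
```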
Removed ICBuildConfiguration.getBinaryParserId() and
IToolChain.getBinaryParserId(). Replaced with methods that return a list
of IDs.
Updated API changes doc.
Rearranged tests so that the test for IToolChain is in a new gcc test
plugin.
Update since tags to 8.0.
Remove api filter.
Fix other since tags after removing the API filter.
Remove interface defaults.
Add default implementations where necessary.
Update tests - TBC.
Added the ability to return a list of binary parser IDs, rather than a
single ID. This supports build configurations that have multiple
binaries built with, for example, cross toolchains.
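An illustrative shape of the new methods (signatures simplified; the API changes doc has the exact ones):
```java
import java.util.List;

// A toolchain whose build configuration produces binaries for more than one
// target can now report every parser it needs instead of a single ID.
interface BinaryParserIdsProvider {
    List<String> getBinaryParserIds();
}

class CrossToolChainExample implements BinaryParserIdsProvider {
    @Override
    public List<String> getBinaryParserIds() {
        // e.g. the standard GNU ELF parser plus a vendor-specific one
        // (the second ID is purely illustrative)
        return List.of("org.eclipse.cdt.core.GNU_ELF", "com.example.vendor.ElfParser");
    }
}
```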
Change-Id: I1b7e47bf6a86bbd9f1c6b9646d008bac9479417d
This change solves the indexing of C/C++ files when multiple
toolchains are used in a single Makefile. This is for the use case in
which one (Linux) gcc compiler plus one or more custom embedded C
compilers (all producing ELF format binaries) are used.
To get proper indexing we need to know, for each resource, which
toolchain was used. The sub build configuration (via extension point
org.eclipse.cdt.core.buildConfigProvider) extends
StandardBuildConfiguration.java and overrides the method
IToolChain getToolChain(List<String> command). tcMap is filled with a map of
toolchains per resource. The primary toolchain keeps pointing to the
gcc toolchain.
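A minimal sketch of the per-resource lookup described above, with generic type parameters standing in for IResource and IToolChain (this is not the actual StandardBuildConfiguration code):
```java
import java.util.HashMap;
import java.util.Map;

// tcMap holds the toolchain chosen for each resource; any resource not in the
// map falls back to the primary (gcc) toolchain.
class PerResourceToolChains<R, T> {
    private final Map<R, T> tcMap = new HashMap<>();
    private final T primaryToolChain;

    PerResourceToolChains(T primaryToolChain) {
        this.primaryToolChain = primaryToolChain;
    }

    void map(R resource, T toolChain) {
        tcMap.put(resource, toolChain);
    }

    T toolChainFor(R resource) {
        return tcMap.getOrDefault(resource, primaryToolChain);
    }
}
```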
Note that FileBasedErrorParserTests had to change because of some
Tycho incompatibility with JUnit's ParameterizedTest. It works
in the IDE, but not in Maven.
The correct fix is to resolve the Tycho settings; see Bug 569949
for a previous example. It may also simply be resolved by updating
to Tycho 3.0.0. However, I want to get this change in, as
at the moment CDT.setup is broken and that is impeding developers.
We can fail to regain our lock in cases other than just an
OperationCanceledException, so capture all of those
cases and throw a FailedToReAcquireLockException.
Also, fix another finally block that assumes it has the lock.
Fixes #128
These binary parsers have been slated for deletion for
a while and are replaced with 64-bit compatible
versions.
Some methods still referenced the 32-bit variants
and have been updated to the fully functioning
64-bit variants.
The older parser IDs are preserved (forever?) so that
old projects can be opened without needing to do anything.
The IDs now point at the new implementations.
See also Bug 562495
This extension has existed since the very early days
of CDT, but it never(?) had a schema, so the improvements
made in #136 will now show errors in uses of this extension.
Steps:
======
1. Create a managed project and build it
2. Expand the built binary available in the binary container in the Project Explorer view
3. Now clean the project; the clean will fail irrespective of the number of tries
Reason:
=======
To find the sources for a binary, an Elf instance is created and Section.mapSectionData creates a MappedByteBuffer of the channel, which locks the file on Windows until it is garbage collected; see the following:
https://bugs.java.com/bugdatabase/view_bug.do?bug_id=4715154
Solution:
=========
Made ISymbolReader AutoCloseable; the user is responsible for properly closing it. In the case of the DWARF reader, we remove all references to the ByteBuffer and call gc.
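A usage sketch of the new contract; `createSymbolReader` is a hypothetical helper standing in for however the reader is obtained:
```java
// ISymbolReader now extends AutoCloseable, so callers should hold it in a
// try-with-resources block; closing it releases the reader's buffers so the
// binary file does not stay locked on Windows.
try (ISymbolReader reader = createSymbolReader(binary)) { // hypothetical helper
    String[] sources = reader.getSourceFiles();
    // ... map the binary back to its source files ...
}
```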
The indexer has a feature that allows readers of the index
to read the index in the middle of write operations. This
is done by using a YieldableIndexLock.
The YieldableIndexLock's yield method can be called to
temporarily give up the write lock. However, the assumption
in the code was that it would always successfully
reacquire the lock after that.
However, if the indexing was cancelled, the lock would
fail to be reacquired. Therefore the code that thinks it
owns the lock no longer owns it; in this case that code is
in PDOMWriter.storeSymbolsInIndex's finally block.
Therefore I have added a new exception type to explicitly
identify this case so the original code can differentiate
between cases where an exception was thrown while the lock
is still held, and cases where the lock is no longer held.
Note that instead of a new exception caught like this:
```java
} catch (FailedToReAcquireLockException e) {
    hasLock = false;
    e.reThrow();
}
```
I could have done this:
```java
} catch (InterruptedException | OperationCanceledException e) {
    hasLock = false;
    throw e;
}
```
But it is not obvious that nothing other than the
acquire can raise an OperationCanceledException, because it
is a RuntimeException. By having a new checked exception we
can know for sure that in the finally block we have lost
our lock.
There are no API implications of this change as all the classes
and interfaces are internal to CDT.
Fixes #128
The format of this error message used to look like:
```
Expected number (0) of Non-OK status objects in log differs from actual (1).
Error while parsing /projC_testTripleDownwardV/h3.h. java.lang.reflect.InvocationTargetException
Caused by: java.lang.reflect.InvocationTargetException
Caused by: java.lang.AssertionError: Need to hold a write lock to clear result caches
```
and it was hard, if not impossible, to identify the cause.
The hope is that capturing this fuller stack trace into the log
will make it easier to identify the problem.
Part of #117
This helps add some isolation between tests in case background
threads are accessing a project. However I am not sure
this solves any of the actual outstanding flaky tests.
Part of #117
While it may be that the tests don't directly rely
on JUnit5, the IDE requires JUnit5 in the classpath
or else the launch config doesn't work with this error:
Cannot find class 'org.junit.platform.commons.annotation.Testable'
on project build path.
Part of #117
JDT thinks this is a test and will run it in the IDE and display
an error. But it is only used to compose other tests; by making
it abstract the IDE won't see it anymore.
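A one-line sketch of the idea (class name illustrative):
```java
// Making the shared base class abstract stops JDT from offering to run it as
// a test on its own; the concrete subclasses are still discovered and run.
public abstract class SharedScenarioTestBase {
    // common fixtures used by the concrete test classes
}
```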
Part of #117
This is an ancient (2004) test that does not apply, was never
referenced in the suites, and whose name doesn't match the standard
pattern.
It tried to import ancient versions of projects as well, which
would kick off the project converter UI that can't be disabled
from the core plug-in.
All in all, this test adds nothing of value.
Part of #117
This code seems to be trying to optimize across tests.
This change isolates each individual test better.
Also removed is a bunch of effectively unused test code.
Part of #117
Some of these tests left behind projects; by changing them
to extend BaseTestCase5 the resource cleanup happens
and the tests are cleaned up properly.
Part of #117
Ideally the code itself should also be deleted from CDT, but
this test is super flaky and I cannot seem to convert it to
JUnit5 so I can properly mark it as flaky. Therefore
the test is now simply gone.
Part of #117
The resource helper is widely used, but when it deletes
projects, it leaves their contents on disk. Fix this
so that the contents on disk are deleted.
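A sketch of the intended cleanup using the standard workspace API (project name illustrative):
```java
// Passing deleteContent=true removes the project files from disk as well,
// instead of only detaching the project from the workspace.
IProject project = ResourcesPlugin.getWorkspace().getRoot().getProject("tempProject");
if (project.exists()) {
    project.delete(true /* deleteContent */, true /* force */, new NullProgressMonitor());
}
```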
Part of #117
Maybe once upon a time this lifecycle did something,
but now fProject is always null in setUp, and therefore
the project was never getting deleted, as the fProject
that deleteProject saw was different from the one
the tests used.
Part of #117
Having the test suites means that tests run multiple times
when running in the UI. Most suites just ran the tests in
their package, so their value, especially with the transition
to JUnit5, is minimal.
Note that the suites are not used when running the build
with Tycho/Maven.
Part of #117
Warnings in build.properties become errors when they run
in the Tycho build, like this:
```
Error: Failed to execute goal org.eclipse.tycho:tycho-packaging-plugin:2.7.5:package-plugin
(default-package-plugin) on project org.eclipse.cdt.core.tests:
/home/runner/work/cdt/cdt/core/org.eclipse.cdt.core.tests/build.properties:
bin.includes value(s) [test.xml] do not match any files. -> [Help 1]
```
So make them errors in the workspace so that the issue is
detected before push.
Some build.properties issues don't affect the build, but
are still indicative of a problem.
If a .cproject references a binary parser ID that is not in
the plug-in XML, or is in the XML but marked as private, the
UI cannot display the binary parsers and was raising an
ArrayIndexOutOfBoundsException, as below.
This fix rewrites the array handling using collections.
```
!ENTRY org.eclipse.ui 4 0 2022-11-04 09:44:27.409
!MESSAGE Unhandled event loop exception
!STACK 0
java.lang.ArrayIndexOutOfBoundsException: Index 7 out of bounds for length 7
at org.eclipse.cdt.ui.newui.BinaryParsTab.updateData(BinaryParsTab.java:253)
at org.eclipse.cdt.ui.newui.AbstractCPropertyTab.setVisible(AbstractCPropertyTab.java:253)
at org.eclipse.cdt.ui.newui.BinaryParsTab.setVisible(BinaryParsTab.java:221)
at org.eclipse.cdt.ui.newui.AbstractCPropertyTab.handleTabEvent(AbstractCPropertyTab.java:630)
at org.eclipse.cdt.ui.newui.AbstractPage.updateSelectedTab(AbstractPage.java:412)
at org.eclipse.cdt.ui.newui.AbstractPage$4.widgetSelected(AbstractPage.java:382)
```
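Illustration only, with hypothetical variable names rather than the actual BinaryParsTab code: collecting the usable parsers into a list avoids indexing past the end of a fixed-size array when the .cproject references parsers that are unknown or private.
```java
List<String> shown = new ArrayList<>();
for (String id : configuredParserIds) { // IDs referenced by the .cproject
    if (knownPublicParserIds.contains(id)) { // IDs contributed via plug-in XML
        shown.add(id);
    }
    // unknown or private parsers are skipped instead of being written into
    // an array slot that was never allocated for them
}
```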
Changed execute() to take the cwd to run the command in, and
cleaned up the related code, including some error message
handling and removing some redundant code.
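A hedged sketch of the general technique only (the surrounding CDT method is not shown): run the command with an explicit working directory.
```java
// Run the command in the given working directory rather than whatever
// directory the launching process happened to inherit.
ProcessBuilder builder = new ProcessBuilder(command);
builder.directory(cwd); // cwd: a java.io.File pointing at the build directory
Process process = builder.start();
```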
Fixes #125