The Eclipse Foundation makes available all content in this plug-in ("Content"). Unless otherwise
+indicated below, the Content is provided to you under the terms and conditions of the
+Eclipse Public License Version 1.0 ("EPL"). A copy of the EPL is available
+at http://www.eclipse.org/legal/epl-v10.html.
+For purposes of the EPL, "Program" will mean the Content.
+
+
If you did not receive this Content directly from the Eclipse Foundation, the Content is
+being redistributed by another party ("Redistributor") and different terms and conditions may
+apply to your use of any object code in the Content. Check the Redistributor's license that was
+provided with the Content. If no such license exists, contact the Redistributor. Unless otherwise
+indicated below, the terms and conditions of the EPL still apply to any source code in the Content
+and such source code may be obtained at http://www.eclipse.org.
+
+DSF Concurrency Model
+
+Providing a solution to concurrency problems is the primary design goal
+of DSF. To that end, DSF imposes a rather draconian
+restriction on services that use it: (1)
+all service interface methods must be called using a single designated
+dispatch thread, unless explicitly stated otherwise, and (2) the dispatch
+thread must never be used to make a blocking call (a call that waits
+on I/O or performs a long-running computation). What
+the first restriction effectively means is that the dispatch thread
+becomes a global "lock" that all DSF services in a given session
+share with each other, and which controls access to most of the services'
+shared data. It's important to note that multi-threading is still allowed
+within individual service implementations, but when crossing the service
+interface boundaries, only the dispatch thread can be used. The
+second restriction simply ensures that the performance of the whole
+system is not killed by one service that needs to read a huge file over
+the network. Another way of looking at it is that the
+service implementations practice co-operative multi-threading using the
+single dispatch thread.
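+
+For instance, a client running on a background thread must re-dispatch before
+touching a service. A minimal sketch of this rule follows, where the executor
+and the service's synchronous getItemCount() method are stand-ins for
+illustration:
+
// WRONG: calling a service interface method from a non-dispatch thread.
// int count = service.getItemCount();

// RIGHT: submit a Runnable to the session's single dispatch thread.
executor.execute(new Runnable() {
    public void run() {
        int count = service.getItemCount(); // safe: no other service code runs concurrently
    }
});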
+
+There are a couple of obvious side effects that result from these restrictions:
+
+
When executing within the dispatch thread, the state of the
+services is guaranteed not to change. This means that
+thread-defensive programming techniques, such as making duplicates of
+lists before iterating over them, are not necessary. It also
+becomes possible to implement much more complicated logic, which polls the state
+of many objects, without worrying about deadlocks.
+
Whenever a blocking operation needs to be performed, it must be
+done using an asynchronous method. By the time the operation is
+completed and the caller regains the dispatch thread, the caller may
+need to retest the relevant state of the system, because it could have
+changed completely while the asynchronous operation was executing.
+
+
The Mechanics
+
java.util.concurrent.ExecutorService
+
+DSF builds on the vast array of tools added in Java 5.0's
+java.util.concurrent package (see http://java.sun.com/j2se/1.5.0/docs/guide/concurrency/index.html
+for details), the most important of which is the ExecutorService
+interface. ExecutorService
+is a formal interface for submitting Runnable objects that will be
+executed according to the executor's rules, which could be to execute the
+Runnable immediately,
+within a thread pool, using a display thread,
+etc. For DSF, the main rule for executors is that they have
+to use a single thread to execute the runnables, and that the runnables
+be executed in the order in which they were submitted. To give
+DSF clients and services a method for checking whether they are
+being called on the dispatch thread, we extended the ExecutorService
+interface as follows:
+
public interface DsfExecutor extends ScheduledExecutorService {
    /**
     * Checks if the thread that this method is called in is the same as the
     * executor's dispatch thread.
     * @return true if in DSF executor's dispatch thread
     */
    public boolean isInExecutorThread();
}
+The Done object
+encapsulates the return value of an asynchronous call in DSF. It
+is actually merely a Runnable with
+an attached org.eclipse.core.runtime.IStatus
+object, but it can be extended by the services or clients to hold
+whatever additional data is needed. The typical pattern in which
+the Done object is used
+is as follows:
+
Service:

public class Service {
    void asyncMethod(Done done) {
        new Job() {
            public void run() {
                // Perform calculation
                ...
                done.setStatus(new Status(IStatus.ERROR, ...));
                fExecutor.execute(done);
            }
        }.schedule();
    }
}
Client:

...
Service service = new Service();
final String clientData = "xyz";
...
service.asyncMethod(new Done() {
    public void run() {
        if (getStatus().isOK()) {
            // Handle return data
            ...
        } else {
            // Handle error
            ...
        }
    }
});
+The service performs the asynchronous operation in a background thread,
+but
+it can still submit the Done runnable
+with the executor. In other words, the Done and other runnables can be
+submitted from any thread, but will always execute in the single
+dispatch thread. Also, if the implementation of asyncMethod() is non-blocking,
+it does not need to start a job; it could just perform the operation in
+the dispatch thread. On the client side, care has to be taken to
+save the appropriate state before the asynchronous method is called,
+because by the time the Done is
+executed, the client state may have changed.
+
+The java.util.concurrent
+package
+doesn't already have a Done,
+because the generic concurrent
+package is geared more towards large thread pools, where clients submit
+tasks to be run in a style similar to Eclipse's Jobs, rather than towards
+the single dispatch thread model of DSF. To this end, the
+concurrent package does have an equivalent object, Future.
+Future has methods that
+allow the client to call the get()
+method and block while waiting for a result, and for this reason it
+cannot
+be used from the dispatch thread. But it can be used, in a
+limited way, by clients which are running on a background thread that
+still
+need to retrieve data from synchronous
+DSF methods. In this case the code might look like the
+following:
+
Service:

public class Service {
    int syncMethod() {
        // Perform calculation
        ...
        return result;
    }
}
Client:

...
DsfExecutor executor = new DsfExecutor();
final Service service = new Service(executor);
Future<Integer> future = executor.submit(new Callable<Integer>() {
    public Integer call() {
        return service.syncMethod();
    }
});
int result = future.get();
+The biggest drawback to using Future
+with DSF services is that it does not work with
+asynchronous methods. This is because the Callable.call()
+implementation
+has to return a value within a single dispatch cycle. To get
+around this, DSF has an additional object called DsfQuery, which works like a Future combined with a Callable, but allows the
+implementation to make multiple dispatches before setting the return
+value for the client. The DsfQuery object works as follows:
+
+
+
Client creates the query object with its own implementation of DsfQuery.execute().
+
+
Client calls the DsfQuery.get()
+method on a non-dispatch thread, and blocks.
+
The query is queued with the executor, and eventually the DsfQuery.execute() method is
+called on the dispatch thread.
+
The query's DsfQuery.execute()
+implementation calls whatever synchronous and asynchronous methods are needed
+to do its job.
+
The query code calls DsfQuery.done()
+method with the result.
+
The DsfQuery.get()
+method un-blocks and returns the result to the client.
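+
+Putting these steps together, a minimal sketch of a client-defined query is
+shown below. It reuses the asynchronous Service from the earlier example;
+the generic parameter and the result calculation are assumptions made for
+illustration:
+
DsfQuery<Integer> query = new DsfQuery<Integer>() {
    public void execute() {
        // Step 3: runs in the dispatch thread, and may span multiple dispatches.
        service.asyncMethod(new Done() {
            public void run() {
                if (getStatus().isOK()) {
                    // Step 5: set the result, which unblocks get().
                    done(computeResult()); // computeResult() is a hypothetical helper
                }
            }
        });
    }
};
int result = query.get(); // Steps 2 and 6: blocks on a non-dispatch thread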
+
+The point of DSF concurrency can be most easily explained through
+a practical example. Suppose there is a viewer which needs to
+show data that originates from a remote "provider". There is a
+considerable delay in transmitting the data to and from the provider,
+and some delay in processing the data. The viewer is a
+lazy-loading table, which means that it requests information only about
+items that are visible on the screen, and as the table is scrolled, new
+requests for data are generated. The diagram below illustrates
+the
+logical relationship between components:
+
+[Diagram: the table viewer, its content provider, and the remote data provider]
+
In detail, these components look like this:
+
+Table Viewer
+
The table viewer is the standard
+org.eclipse.jface.viewers.TableViewer,
+created with the SWT.VIRTUAL
+flag. It has an associated content
+provider, SlowDataProviderContentProvider, which handles all the
+interactions with the data provider. The lazy content provider
+operates in a very simple cycle:
+
+
The table viewer tells the content provider that the input has changed by
+calling IContentProvider.inputChanged().
+This means that the content provider has to query the initial state of the
+data.
+
Next the content provider tells the viewer how many elements
+there are, by calling TableViewer.setItemCount().
+
At this point, the table resizes, and it requests data values for
+items that are visible. So for each visible item it calls: ILazyContentProvider.updateElement().
+
After calculating the value, the content provider tells the table
+what the value is, by calling TableViewer.replace().
+
If the data ever changes, the content provider tells the table to
+re-request the data, by calling TableViewer.clear().
+
+The table viewer operates in the
+SWT display thread, which means that the content provider must switch
+from the display thread to the DSF dispatch thread whenever it is
+called by the table viewer, as in the example below:
+
public void updateElement(final int index) {
    assert fTableViewer != null;
    if (fDataProvider == null) return;

    fDataProvider.getExecutor().execute(new Runnable() {
        public void run() {
            // Must check again, in case disposed while redispatching.
            if (fDataProvider == null) return;
            queryItemData(index);
        }
    });
}
+Likewise, when the content provider calls the table viewer, it also has
+to switch back into the display thread, as in the following example, in which
+the content provider receives an event from the data provider that an
+item value has changed.
+
public void dataChanged(final Set<Integer> indexes) {
    // Check for dispose.
    if (fDataProvider == null) return;

    // Clear changed items in table viewer.
    if (fTableViewer != null) {
        final TableViewer tableViewer = fTableViewer;
        tableViewer.getTable().getDisplay().asyncExec(new Runnable() {
            public void run() {
                // Check again if table wasn't disposed when
                // switching to the display thread.
                if (tableViewer.getTable().isDisposed()) return; // disposed
                for (Integer index : indexes) {
                    tableViewer.clear(index);
                }
            }
        });
    }
}
+All of this switching back and forth between threads makes the code
+look a lot more complicated than it really is, and it takes some
+getting used to, but this is the price to be paid for multi-threading.
+Whether the participants use semaphores or the dispatch thread, the
+logic is equally complicated, and we believe that using a single
+dispatch thread makes the synchronization very explicit and thus less
+error-prone.
+
Data Provider Service
+
The data provider service interface, DataProvider, is very similar
+to that of the lazy content provider. It has methods to:
+
+
get item count
+
get a value for a given item
+
register as a listener for changes in the data count and data values
+
+But this is a DSF interface, and all of its methods must be called on the
+service's dispatch thread. For this reason, the DataProvider interface returns
+an instance of DsfExecutor,
+which must be used with the interface.
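+
+Pieced together from the method list above and the getItem() signature shown
+later in this document, the interface might look roughly like this (the exact
+names and the Listener type are assumptions):
+
public interface DataProvider {
    // Executor that must be used when calling the methods below.
    DsfExecutor getExecutor();

    // Retrieves the number of items in the data set.
    void getItemCount(GetDataDone<Integer> done);

    // Retrieves the value of the item at the given index.
    void getItem(int index, GetDataDone<String> done);

    // Listeners for changes in data count and data values.
    void addListener(Listener listener);
    void removeListener(Listener listener);
}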
+
Slow Data Provider
+
The data provider is actually implemented as a thread which is an
+inner class of the SlowDataProvider
+service. The provider thread
+communicates with the service by reading Request objects from a shared
+queue, and by posting Runnable objects directly to the DsfExecutor, but
+with a simulated transmission delay. Separately, an additional
+flag is also used to control the shutdown of the provider thread.
+To simulate a real back end, the data provider randomly invalidates a
+set of items and notifies the listeners to update themselves. It
+also periodically invalidates the whole table and forces the clients to
+re-query all items.
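+
+The provider thread's main loop might look roughly like the sketch below.
+The Request objects, shared queue, and shutdown flag follow the description
+above, but the field names, delay constant, and processing helper are
+assumptions made for illustration:
+
public void run() {
    try {
        while (!fShutdown) {
            // Block until the service queues a request.
            Request request = fQueue.take();
            // Simulated transmission delay.
            Thread.sleep(TRANSMISSION_DELAY);
            // Calculate the result and post it back to the dispatch thread.
            processRequest(request); // hypothetical helper: fills in request.fDone
            fExecutor.execute(request.fDone);
        }
    } catch (InterruptedException e) {
        // Interrupted while shutting down; let the thread exit.
    }
}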
+
Data and Control Flow
+
+This can be described in the following steps:
+
+
The table viewer requests data for an item at a given index (SlowDataProviderContentProvider.updateElement).
+
+
The table viewer's content provider executes a Runnable in the DSF
+dispatch thread and calls the data provider interface (SlowDataProviderContentProvider.queryItemData).
+
The data provider service creates a Request object and files it in a
+queue (SlowDataProvider.getItem).
+
The data provider thread de-queues the Request object and acts on it,
+calculating the value (ProviderThread.processItemRequest).
+
The data provider thread schedules the calculation result to be
+posted with the DSF executor (SlowDataProvider.java:185).
+
The Done callback sets the result data in the table viewer (SlowDataProviderContentProvider.java:167).
+
+
+
Running the example and full sources
+This example is implemented in the org.eclipse.dd.dsf.examples
+plugin, in the org.eclipse.dd.dsf.examples.concurrent
+package.
+
+To run the example:
+
+
Build the test plugin (along with the org.eclipse.dsdp.DSF plugin)
+and launch the PDE.
+
+
Make sure to add the DSF
+Tests action set to your current perspective.
+
From the main menu, select DSF
+Tests -> Slow Data Provider.
+
A dialog will open and, after a delay, populate with data.
+
Scroll and resize the dialog and observe the update behavior.
+
+
Initial Notes
+
+This example is supposed to be representative of a typical embedded
+debugger design problem. Embedded debuggers are often slow in
+retrieving and processing data, and can sometimes be accessed only through a
+relatively slow data channel, such as a serial port or JTAG
+connection. As such, this basic example presents a couple
+of major usability problems:
+
+
The data provider service interface mirrors the table's content
+provider interface, in that it has a method to retrieve a single piece
+of data at a time. The result of this is visible to the user as
+lines of data are filled in one-by-one in the table. However,
+most debugger back ends are in fact capable of retrieving data in
+batches and are much more efficient at it than retrieving data items
+one-by-one.
+
When scrolling quickly through the table, requests are
+generated by the table viewer for items which are quickly scrolled out
+of view, but the service still queues them up and calculates them in
+the order they were received. As a result, it takes a very long
+time for the table to be populated with data at the location where the
+user is looking.
+
+
+These two problems are very common in creating UIs for embedded
+debugging, and there are common patterns which can be used to solve
+them in DSF services.
+
Coalescing
+Coalescing many single-item requests into fewer multi-item requests is
+the surest way to improve performance in communication with a remote
+debugger, although it's not necessarily the simplest. There are
+two basic patterns in which coalescing is achieved:
+
+
The back end provides an interface for retrieving data in large
+chunks. So when the service implementation receives a request for
+a single item, it retrieves a whole chunk of data, returns the single
+item, and stores the rest of the data in a local cache.
+
The back end provides an interface for retrieving data in
+variable-size chunks. When the service implementation receives a
+request for a single item, it buffers the request and waits for other
+requests to come in. After a delay, the service clears the buffer
+and submits a request for the combined items to the data provider.
+
+In practice, a combination of the two patterns is needed, but for the
+purpose of an example, we implemented the second pattern in the
+"Input-Coalescing Slow Data Provider" (InputCoalescingSlowDataProvider.java);
+a sketch of the first pattern is shown below.
+
+
Input Buffer
+
The main feature of this pattern is a buffer for holding the
+requests before sending them to the data provider. In this
+example the user requests are buffered in two arrays: fGetItemIndexesBuffer and fGetItemDonesBuffer. The
+DataProvider.getItem()
+implementation is changed as follows:
+
public void getItem(final int index, final GetDataDone<String> done) {
    // Schedule a buffer-servicing call, if one is needed.
    if (fGetItemIndexesBuffer.isEmpty()) {
        fExecutor.schedule(
            new Runnable() {
                public void run() {
                    fileBufferedRequests();
                }
            },
            COALESCING_DELAY_TIME, TimeUnit.MILLISECONDS);
    }

    // Add the call data to the buffer.
    // Note: it doesn't matter that the items were added to the buffer
    // after the buffer-servicing request was scheduled. This is because
    // the buffers are guaranteed not to be modified until this dispatch
    // cycle is over.
    fGetItemIndexesBuffer.add(index);
    fGetItemDonesBuffer.add(done);
}
+And the method that services the buffer looks like this:
+
public void fileBufferedRequests() {
    // Remove a number of getItem() calls from the buffer, and combine them
    // into a request.
    int numToCoalesce = Math.min(fGetItemIndexesBuffer.size(), COALESCING_COUNT_LIMIT);
    final ItemRequest request = new ItemRequest(new Integer[numToCoalesce], new GetDataDone[numToCoalesce]);
    for (int i = 0; i < numToCoalesce; i++) {
        request.fIndexes[i] = fGetItemIndexesBuffer.remove(0);
        request.fDones[i] = fGetItemDonesBuffer.remove(0);
    }

    // Queue the coalesced request, with the appropriate transmission delay.
    fQueue.add(request);

    // If there are still calls left in the buffer, execute another
    // buffer-servicing call, but without any delay.
    if (!fGetItemIndexesBuffer.isEmpty()) {
        fExecutor.execute(new Runnable() {
            public void run() {
                fileBufferedRequests();
            }
        });
    }
}
+The most interesting feature of this implementation is the fact that
+there are no semaphores anywhere to control access to the input
+buffers. Even though the buffers are serviced with a delay and
+multiple clients can call the getItem()
+method, the use of a single
+dispatch thread prevents any race conditions that could corrupt the
+buffer data. In real-world implementations, the buffers and
+caches that need to be used are far more sophisticated, with much more
+complicated logic, and this is where managing access to them using the
+dispatch thread is all the more important.
+
Cancellability
+
Table Viewer
+
+Unlike coalescing, which can be implemented entirely within the
+service, cancellability requires that the client be modified as well
+to take advantage of this capability. For the table viewer
+content provider, this means that additional features have to be
+added. In CancellingSlowDataProviderContentProvider.java,
+ILazyContentProvider.updateElement()
+was changed as follows:
+
public void updateElement(final int index) {
    assert fTableViewer != null;
    if (fDataProvider == null) return;

    // Calculate the visible index range.
    final int topIdx = fTableViewer.getTable().getTopIndex();
    final int botIdx = topIdx + getVisibleItemCount(topIdx);

    fCancelCallsPending.incrementAndGet();
    fDataProvider.getExecutor().execute(new Runnable() {
        public void run() {
            // Must check again, in case disposed while redispatching.
            if (fDataProvider == null || fTableViewer.getTable().isDisposed()) return;
            if (index >= topIdx && index <= botIdx) {
                queryItemData(index);
            }
            cancelStaleRequests(topIdx, botIdx);
        }
    });
}
+Now the client keeps track of the requests it has made to the service in fItemDataDones, and above, cancelStaleRequests() iterates
+through all the outstanding requests and cancels the ones that are no
+longer in the visible range.
+
Data Provider Service
+
+
The data provider implementation
+(CancellableInputCoalescingSlowDataProvider.java)
+builds on top of the
+coalescing data provider. To make the canceling feature useful,
+the data provider service has to limit the size of the request
+queue. This is because this example simulates
+communication with a target, and once requests are filed into the
+request
+queue they cannot be canceled, just as a client can't cancel
+requests once it has sent them over a socket. So instead, if a flood
+of getItem()
+calls comes in, the service has to hold most of them in the coalescing
+buffer in case the client decides to cancel them. Therefore the
+fileBufferedRequests()
+method includes a simple check before servicing
+the buffer, and if the request queue is full, the buffer-servicing call
+is delayed.
+
if (fQueue.size() >= REQUEST_QUEUE_SIZE_LIMIT) {
    if (fGetItemIndexesBuffer.isEmpty()) {
        fExecutor.schedule(
            new Runnable() {
                public void run() {
                    fileBufferedRequests();
                }
            },
            REQUEST_BUFFER_FULL_RETRY_DELAY, TimeUnit.MILLISECONDS);
    }
    return;
}
+Beyond this change, the only other significant difference is that before
+the requests are queued, they are checked for cancellation.
+
Final Notes
+
+The example given here is fairly simplistic, and chances are that the
+same example could be implemented using semaphores and free threading
+with perhaps fewer lines of code. But what we have found is that
+as the problem gets bigger (the number of
+features in the data provider increases, the state of the
+communication protocol gets more complicated, and the number of modules
+needed in the service layer grows), using free threading and
+semaphores does not safely scale. Using a dispatch thread for
+synchronization certainly doesn't make the inherent problems of the
+system less complicated, but it does help eliminate race conditions
+and deadlocks from the overall system.
+
Coalescing and cancellability are both optimizations. Neither
+of these optimizations affected the original interface of the service,
+and one of them needed only a service-side modification. But as
+with all optimizations, it is often better to first make sure that the
+whole system is working correctly, and then add optimizations where they
+can make the biggest difference in user experience.
+
The optimizations above can take many forms; as
+mentioned with coalescing, caching data that is retrieved from the data
+provider is the most common form of data coalescing. For
+cancellation, many services in DSF build on top of other services,
+which means that even a low-level service can cause a higher-level
+service to retrieve data, while another event might cause it to
+cancel those requests. The perfect example of this is a Variables
+service, which is responsible for calculating the values of expressions
+shown in the Variables view. When the Run Control service issues a
+suspended event, the Variables service reacts by requesting a
+set of variable evaluations from the debugger back end. But as
+soon as a resumed event is issued by Run Control, the Variables service
+needs to cancel the pending evaluation requests.
+
+
+
+
+
diff --git a/plugins/org.eclipse.dd.doc.dsf/docs/dsf_data_model.html b/plugins/org.eclipse.dd.doc.dsf/docs/dsf_data_model.html
new file mode 100644
index 00000000000..bd1b40112e6
--- /dev/null
+++ b/plugins/org.eclipse.dd.doc.dsf/docs/dsf_data_model.html
@@ -0,0 +1,286 @@
+
+
+
+
+ DSF Data Model
+
+
+
The data model aspect of DSF is only partially complete as compared
+to the Concurrency and Services Models. The goals for its design
+are:
+
+
+
Separate the structure of the
+data in the services from the model used for presentation in views.
+This seems like a basic model-viewer separation, which is something
+that we theoretically already have. But in reality the current
+platform debug model APIs closely correspond to how the data is
+laid out in debug views, and even with the flexible hierarchy views it
+is
+difficult to provide alternative layouts.
+
Allow for a modular
+implementation of services that contribute to the data model.
+
+
+
Perform well with large
+data sets.
+
Make the data model interfaces
+convenient to use by other services as well as by views.
+Some interim designs of the DSF data model APIs were very well suited for
+populating views (through asynchronous content and label providers), but
+were very difficult to use for other purposes, such as by another
+service, or by a client that creates a dialog. This led to services
+implementing two sets of interfaces for the same data, which was more
+expensive to develop and maintain.
+
+
Allow for easy changes to the
+layout of data in views. This is from the point of view of
+a debugger implementer that would like to modify the standard layout of
+debugger data.
+
+
Allow the users to modify the
+layout of data in views. This is a logical extension
+of the previous goal.
+
+
+That's a pretty ambitious set of goals to keep in mind, which partly
+explains why the design is not fully complete yet. In particular,
+the last goal doesn't have any implementation at this point. But
+other than that, we believe that our current design mostly
+meets the other goals. It remains to be seen how well it will
+hold up
+beyond a prototype implementation.
+
The DSF data model is divided into two parts: a non-UI part that
+helps services expose data in a consistent form, and a UI part that
+helps viewers present the data. They are described separately in
+the two sections below.
+
+
Timers Example
+
A "timers
+example" is included with the DSF plugins which
+demonstrates the use of data model and view model
+APIs. It is probably much easier to digest this document
+when referring to this example for usage.
+
+As stated before, the aim of this API is to allow services to provide
+data with just enough common information that it can be easily
+presented in the view, but with a simple enough design that the
+data can be accessed by non-viewer clients. The type of data in
+services can vary greatly from service to service; for
+example:
+
+
service data might be extremely large and thus may only be
+retrieved from a back-end process in small chunks, while some service
+data might always be stored locally in the service
+
+
data might take a very long time to retrieve, or it could be
+instantaneous
+
+
some services might support canceling of the request while it is
+being processed, while other services might not
+
+
some data may change very frequently, other data may not change
+at all
+
+
+The data model API tries to find a common denominator for these
+divergent properties and imposes the following restrictions:
+
+
Each "chunk" of data that comes from a service has a
+corresponding IDataModelContext (Data Model Context)
+object.
+
+
The DM-Context objects are to be generated by the data model services (IDataModelService) with either
+synchronous or asynchronous methods, and taking whatever arguments are
+needed. Put differently, how DM-Contexts are created is up to the
+service.
+
The service will provide a method for retrieving each "chunk" of
+model data (IDataModelData)
+using a method that requires no other arguments besides the DM-Contexts.
+
+
DM-Context (IDataModelContext)
+
+The DM-Contexts are the most
+important part of this design, so they warrant a closer look. The
+interface is listed below:
+
public interface IDataModelContext<V extends IDataModelData> extends IAdaptable {
    public String getSessionId();
    public String getServiceFilter();
    public IDataModelContext[] getParents();
}
+First of all, the interface extends IAdaptable,
+which allows clients to use these objects as handles that are stored
+with UI components. However, the implementation of IDataModelContext.getAdapter()
+presents a particular challenge. If the standard platform method
+of retrieving an adapter is used (PlatformObject.getAdapter()),
+then there can only be one adapter registered for a given DM-Context class,
+which has to be shared by all the DSF sessions that are running
+concurrently. Thus one debugger that implements an IStack.IFrameDMContext would have to
+share the same instance of
+IAsynchronousLabelAdapter with another debugger implementation
+that is running at the same time. To overcome this problem, DSF
+provides a method for registering adapters with a session, using DsfSession.registerModelAdapter(),
+instead of with the platform (Platform.getAdapterManager().registerAdapters()).
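+
+In code, the session-level registration replaces the platform-level one; a
+minimal sketch, in which the adapter factory and label adapter instances are
+assumed to exist:
+
// Platform-wide: a single adapter shared by all concurrently running sessions.
// Platform.getAdapterManager().registerAdapters(adapterFactory, IFrameDMContext.class);

// Session-specific: each debugger session supplies its own adapter.
session.registerModelAdapter(IAsynchronousLabelAdapter.class, labelAdapter);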
+
+
+The getSessionId()
+method serves two purposes. First, it allows the
+IAdaptable.getAdapter()
+implementation to work as described above. Second, it allows clients to
+access the correct dispatch thread (DsfSession.getSession(id).getExecutor())
+for calling the service that the DM-Context originated from.
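+
+For example, a client holding only a DM-Context can re-dispatch into the
+originating session as in the following sketch:
+
// Look up the session the context came from, and run on its dispatch thread.
DsfSession session = DsfSession.getSession(dmContext.getSessionId());
session.getExecutor().execute(new Runnable() {
    public void run() {
        // It is now safe to call the service the DM-Context originated from.
    }
});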
+
+
+The getServiceFilter()
+method is included to allow for future development. It is
+intended to allow the client to precisely identify the service that
+the DM-Context originated from, without having to examine the exact class type
+of the DM-Context. But this functionality will not really be needed
+until we start writing generic, data-driven clients.
+
+
+The getParents()
+method allows the DM-Contexts to be connected together into something that can
+be considered a "model". Of course, most debugger data objects
+require the context of other objects in order to make sense: a stack
+frame is meaningless without its thread, debug symbols belong to a
+module, which belongs to a process, etc. In other words, there is
+some natural hierarchy to the data in debug services which needs to be
+accessible through the data model APIs. This hierarchy may be the
+same hierarchy that is to be shown in some debug views, but it doesn't
+have to be. More importantly, this hierarchy should allow for a
+clean separation of debug services, and for a clear dependency graph
+between these services.
+
+View Model
+
+The view model is the component which allows the DSF data model to be presented
+in
+the views with different, configurable layouts. It is tightly
+integrated with the recently added (and still provisional)
+flexible-hierarchy viewers in the org.eclipse.debug.ui
+plugin (see the EclipseCon 2006 presentation
+for more details). Actually, the platform flexible hierarchy
+framework already provides all the adapter interfaces needed to present
+the DSF data model in the viewers, and it is possible to do
+that. However, the flexible hierarchy views were not specifically
+designed for DSF, and there are a few ugly patterns that emerge when
+using them with the DSF data model interfaces directly:
+
+
Because of the nature of the IAdaptable pattern, the flexible
+hierarchy label and content adapters have to have a single instance
+that works for all views that the objects appear in. This leads
+to a lot of if-else statements, which make the implementation difficult
+to follow.
+
+
There is a single adapter for all DSF data model elements in the
+tree (from the same session), so the adapters have even more if-else
+statements to handle the different elements in the viewer.
+
Most of DSF adapter work needs to be performed in the dispatch
+thread, so each handler starts with a re-dispatch call.
+
In all of this, the logic which determines the hierarchy of
+elements in the viewer is very hard to follow.
+
+The view model API tries to address these issues in the following way:
+
+
It divides the adapter work for different views into separate ViewModelProvider objects.
+
It defines the view layout in an object-oriented manner using the
+ IViewModelLayoutNode
+objects.
+
It consolidates the logic of switching to dispatch thread in one
+place, and allows the ViewModelProvider
+objects to work only in dispatch thread.
+
+
+
IViewModelLayoutNode
+The core of the logic in this design lies in the implementation of the IViewModelLayoutNode objects.
+This interface is listed below:
+
public interface IViewModelLayoutNode {
    public IViewModelLayoutNode[] getChildNodes();
    public void hasElements(IViewModelContext parentVmc, GetDataDone<Boolean> done);
    public void getElements(final IViewModelContext parentVmc, GetDataDone<IViewModelContext[]> done);
    public void retrieveLabel(IViewModelContext vmc, final ILabelRequestMonitor result);
    public boolean hasDeltaFlags(IDataModelEvent e);
    public void buildDelta(IDataModelEvent e, ViewModelDelta parent, Done done);
    public void sessionDispose();
}
+The getChildNodes()
+method allows these layout nodes to be combined into a tree structure,
+which mimics the layout of elements in the view. What the
+children are depends on the implementation: some may be configurable
+and
+some may be fixed.
+
+The hasElements()
+and getElements()
+methods generate the actual elements that will appear in the
+view. The methods are analogous to the flexible hierarchy API
+methods: IAsynchronousContentAdapter.isContainer()
+and IAsynchronousContentAdapter.retrieveChildren()
+and are pretty straightforward to implement. Also retrieveLabel()
+is directly analogous to
+IAsynchronousLabelAdapter.retrieveLabel().
+
+The hasDeltaFlags()
+and buildDelta()
+are used to generate model deltas in response to service events. These
+are discussed in the next section.
+
+Finally, in most cases the elements in the views correspond
+directly to IDataModelContext
+(DM-Context) objects of a specific type. In those cases, the DMContextVMLayoutNode
+abstract class implements the common functionality of that pattern.
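+
+To illustrate the tree structure, here is a hedged sketch of composing two
+layout nodes; the concrete node classes and the provider wiring are
+hypothetical, since only the getChildNodes() accessor is specified:
+
// Threads appear at the root of the view; stack frames appear under threads.
IViewModelLayoutNode framesNode = new StackFramesLayoutNode();
IViewModelLayoutNode threadsNode = new ThreadsLayoutNode(framesNode);
viewModelProvider.setRootLayoutNode(threadsNode);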
+
Model deltas
+The hasDeltaFlags() and buildDelta() methods are used
+to implement the IModelProxy adapter,
+and are the trickiest aspect of this design. The difficulty is
+that the flexible hierarchy views require that the IModelProxy translate
+model-specific events into generic model deltas that can be
+interpreted by the viewer. The deltas (IModelDelta) are tree
+structures which are supposed to mirror the structure of nodes in the
+tree, and which contain flags that tell the viewer what has changed in
+the view and how.* This means that if the
+model proxy receives an event for some IDataModelContext (DM-Context) object,
+it needs to know if this object is in the viewer's tree, and what is
+the full path (or paths) that leads to this object.
+
The model delta is generated by first calling the top layout node's hasDeltaFlags() with the
+received event, which then can either return true or ask any of its
+children if they have deltas (which in turn return true or call their
+children, and so on). If a node returns true for hasDeltaFlags(), then the
+asynchronous buildDelta()
+is called with the event and a parent delta node, to generate the delta
+elements and flags for its node. Once a layout node generates
+its delta objects, it still needs to call its children, which in turn
+add their delta information, and so on.
+
+
* It's not strictly true that a full path to
+an element always has to be present for model deltas to work. If
+the full path is not present, the viewer will try to find the element
+using an internal map that it keeps of all of the elements it
+knows.
+But since the viewer is lazy-loading, it is possible (and likely) that
+the element affected by an event is not even known to the viewer at the
+time of the event, and for some delta actions, IModelDelta.SELECT and IModelDelta.EXPAND, this is not
+acceptable.
+
+
+
diff --git a/plugins/org.eclipse.dd.doc.dsf/docs/dsf_mi_instructions.html b/plugins/org.eclipse.dd.doc.dsf/docs/dsf_mi_instructions.html
new file mode 100644
index 00000000000..7d6e2b51153
--- /dev/null
+++ b/plugins/org.eclipse.dd.doc.dsf/docs/dsf_mi_instructions.html
@@ -0,0 +1,135 @@
+
+
+
+
+ GDB/MI Debugger on top of DSF - Instructions
+
+
+
Create a new "Managed make build project" called "hello".
+
Create a simple hello.c source file:
+
+
+
#include <stdio.h>

int main(void) {
    printf("Hello world");
}
+
+
+
Build the project.
+
+
Create a new "DSF C/C++ Local Application" launch
+configuration (one with the pink icon) and set the executable and entry
+point to "main"
+
+
Launch and step through.
+
If the "source not found" page appears, the a path mapping needs
+to be created. This is an issue with latest cygwin gdb.
+
+
+
Click on the "Edit source lookup" button in the editor, or
+right click on the launch node in Debug View and select "Edit source
+lookup"
+
Click on the "Add..." button
+
Select "Path Mapping" and click OK.
+
+
Select the new "Path Mapping" source container and click the
+"Edit..." button.
+
Once again, click the "Add..." button to create a mapping.
+
Enter the path to map from. Look at the stack frame label
+in the Debug view: if the filename is something like
+"/cygdrive/c/workspace/hello/hello.c", enter the path to the first real
+directory, "/cygdrive/c/workspace".
+
Enter the correct path in the file system to the directory entered
+above. In the example above, it would be "C:\workspace".
+
Click OK three times and you'll be back in Kansas... ehm, the Debug
+view, that is.
+
If the source doesn't show up right away, try stepping once.
+
+
+
+
Supported Platforms
+
+Currently, only Windows with Cygwin GDB is supported.
+
+
+
+
Current Features
+
+
+
Launching
+
+
The "DSF C/C++Local Application" is the standard CDT launch
+configuration minus some of the features.
+
+
What is NOT working here is
+
+
+
Debugger tab: the selection of debugger back ends (gdb/mi,
+Cygwin gdb debugger, etc.), tThe implementation is currently hard-wired
+for Cygwin,
+
Debugger tab: Debugger Options section
+
+
+
Debug view
+
+
Single thread debugging only.
+
Terminating
+
+
Stepping
+
+
Resume/Suspend
+
+
+
Console support
+
+
+
GDB process output
+
NO user process console support
+
+
+
Breakpoints
+
+
Basic CDT breakpoint support implemented
+
+
no filtering support,
+
+
no advanced options (hardware, temporary, etc)
+
no watchpoints
+
+
+
+
Variables
+
+
not yet
+
+
+
+
+Updated Aug 25th, 2006
+
+
+
diff --git a/plugins/org.eclipse.dd.doc.dsf/docs/dsf_services_model-1.png b/plugins/org.eclipse.dd.doc.dsf/docs/dsf_services_model-1.png
new file mode 100644
index 00000000000..b593371ee80
Binary files /dev/null and b/plugins/org.eclipse.dd.doc.dsf/docs/dsf_services_model-1.png differ
diff --git a/plugins/org.eclipse.dd.doc.dsf/docs/dsf_services_model-2.png b/plugins/org.eclipse.dd.doc.dsf/docs/dsf_services_model-2.png
new file mode 100644
index 00000000000..0af43dc6a77
Binary files /dev/null and b/plugins/org.eclipse.dd.doc.dsf/docs/dsf_services_model-2.png differ
diff --git a/plugins/org.eclipse.dd.doc.dsf/docs/dsf_services_model.html b/plugins/org.eclipse.dd.doc.dsf/docs/dsf_services_model.html
new file mode 100644
index 00000000000..8380a0e4ae3
--- /dev/null
+++ b/plugins/org.eclipse.dd.doc.dsf/docs/dsf_services_model.html
@@ -0,0 +1,363 @@
+
+
+
+
+ DSF Services Model
+
+
+
Debugger Services Framework (DSF) is primarily a service framework
+defining rules for how
+services should be registered, discovered, organized into functional
+groups, communicated with, and started/ended. These rules help to
+organize the services into a functional system that efficiently
+abstracts various debugger back end capabilities.
+In OSGI, any class can be registered as a service. In DSF,
+services must implement the IDsfService
+interface, which requires that the service
+provide:
+
+
Access to the DsfExecutor that
+has to be used to access service methods.
+
Full list of properties used to uniquely identify the service in
+OSGI.
+
Startup and shutdown methods.
+
+For the first two items, a service must use the data it received from
+its constructor. For the third item, a service must register and
+unregister itself with OSGI. Beyond that, this is all that
+services have in common; everything else is up to the specific service
+interface.
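+
+A hedged sketch of what this common contract amounts to is shown below; the
+method names are illustrative rather than the exact interface:
+
public interface IDsfService {
    // The executor that must be used to access this service's methods.
    DsfExecutor getExecutor();

    // Properties that uniquely identify this service in OSGI,
    // including IDsfService.PROP_SESSION_ID.
    java.util.Dictionary<String, ?> getProperties();

    // Asynchronous startup and shutdown; implementations register and
    // unregister themselves with OSGI here.
    void initialize(Done done);
    void shutdown(Done done);
}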
+
+DSF services are organized into logical groups, called
+sessions. Sessions are necessary because we want multiple
+instances of systems built with DSF services to run at the same
+time. Since there is only a single OSGI service
+registry, if multiple services were registered with a given class
+name, OSGI would not be able to distinguish between them based on the
+class name alone. So there is an additional property which is
+used by every DSF service when registering with OSGI, IDsfService.PROP_SESSION_ID.
+
+
A Session object
+(TODO: link javadoc) has the following data associated with it:
+
+
+
Session ID - A String object that is unique
+among all other sessions. This ID is used by services as the IDsfService.PROP_SESSION_ID
+property, and it is used by the client to obtain the Session object instance.
+
DsfExecutor
+- Each session has a single executor. This means that all the
+services in a single session share the same executor and dispatch
+thread, and conversely it means that when operating in the dispatch
+thread, the state of all the services in a session will remain the same
+until the end of a dispatch. Note: multiple sessions could share the same DsfExecutor.
+
Service startup counter
+- An integer counter which is read and incremented by every service
+that is started in a session. This counter is used to determine
+the dependency order among services, which is used by events.
+
Event listener list
+- This will be covered in the "Events" section.
+
Adapter list - A
+list of adapters, providing functionality analogous to the runtime's org.eclipse.core.internal.runtime.AdapterManager.
+Sessions need to manage their own lists of adapters, so that IAdaptable objects which
+originate from DSF services can provide different adapters, based
+on the session that they originate from. This feature is covered
+in detail in the "DSF Data Model" document.
+
+
+
The Session class also has a number of static features used to
+manage Session objects:
+
+
Session ID counter
+- Used to generate new session IDs.
+
Methods for starting
+and ending sessions
+
+
Session started/ended
+event listener list - This allows clients to be notified when
+sessions are created or terminated, which is used mostly for clean-up
+purposes.
+
+
+
Startup/Shutdown
+Managing the startup and shutdown process is often the most complicated
+aspect of modular systems. The details of how the startup and
+shutdown processes should be performed are also highly dependent on the
+specifics of the system and service implementations. To help
+with this, DSF provides two simple guidelines:
+
+
There should be a clear
+dependency tree of all services within a session - When the
+dependencies between services are clearly defined, it is possible to
+bring up and bring down the services in an order that guarantees each
+running service can access all of the services that it depends on.
+
There needs to be a
+single point of control which brings up and shuts down all the
+services - In other words, services should not initialize or
+shut down themselves based on some global event that they are all
+listening to. Rather, an external piece of logic needs to be
+in charge of performing this operation.
+
+The main implication of the first guideline is that each service can
+get and hold onto references to other services, without having to
+repeatedly check whether the service references are still valid.
+This is because if a given service is to be shut down, all services
+that depend on it will already have been shut down. The
+second guideline simply ensures that startup and shutdown procedures
+are clear and easy to follow.
+
+The OSGI methods for obtaining and tracking services can be rather
+complicated. To obtain a reference to a service, the client has
+to:
+
+
Get a reference to a BundleContext
+ object, which can be retrieved from the plugin class.
+
Obtain a service reference object by calling BundleContext.getServiceReference().
+
Obtain an instance of the service by calling BundleContext.getService(ServiceReference).
+
But worst of all, when the client is finished using the service, it has
+to call BundleContext.ungetService(ServiceReference),
+because the bundle context counts the outstanding references to a given
+service. All this paperwork is useful for services which manage
+their own life cycle, and could be un-registered at any time. To
+make managing references to these kinds of services easier, OSGI provides a
+utility class called ServiceTracker.
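+
+In code, the sequence above looks roughly like this (standard OSGI API; the
+service class is a stand-in):
+
void useDataProvider(BundleContext context) {
    // Steps 2 and 3: look up a service reference, then the service instance.
    ServiceReference ref = context.getServiceReference(DataProvider.class.getName());
    if (ref == null) return;
    try {
        DataProvider service = (DataProvider) context.getService(ref);
        service.getExecutor(); // use the service
    } finally {
        // The paperwork: every getService() must be paired with ungetService().
        context.ungetService(ref);
    }
}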
+
+
For DSF services, the life cycle of the services is much more
+predictable, but the process of obtaining a reference to a service is
+just as onerous. DSF provides its own utility, which is
+separate from the ServiceTracker,
+named DsfServicesTracker.
+The differences between the two are listed in the table below:
+
+
+
+| Property | OSGI ServiceTracker | DSF DsfServicesTracker |
+| --- | --- | --- |
+| Number of services tracked | While not strictly limited, it is optimized for tracking services of a single class type, or more typically to track a single service reference. | Designed to track services within a single DSF session. |
+| When service references are obtained | Obtains references automatically as the services register themselves. | Service references are obtained as requested by the client, and cached. |
+| Synchronization | Multi-thread accessible. | Can be accessed only on the session's dispatch thread. |
+| Clean-up | Automatically un-gets references for services that are shut down. | Client must listen to session events, and clean up as needed. |
+
Both trackers are useful. Service implementations that depend
+on a number of other services are most likely to use the DSF DsfServicesTracker, while
+clients which use a single service may find the OSGI ServiceTracker more suitable.
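+
+For example, a client on the session's dispatch thread might use the DSF
+tracker roughly as follows; the constructor arguments and the service class
+are assumptions based on the description above:
+
// Must be called on the session's dispatch thread.
DsfServicesTracker tracker = new DsfServicesTracker(bundleContext, sessionId);
DataProvider provider = tracker.getService(DataProvider.class);
if (provider != null) {
    provider.getExecutor(); // call the service, still on the dispatch thread
}
tracker.dispose(); // clients clean up the tracker themselves, e.g. on session shutdown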
+
+
Events
+Events are the most unconventional component of the services package,
+and probably the most likely to need modifications to the design by the
+community. The design goal of
+the event system is to allow a hierarchy of event classes, where a
+listener can register itself for a specific event class or for all
+events which derive from a base class. The use case for this
+behavior is in the data model, where we would like to have the ability
+to capture all model-related events with a generic listener, while at
+the same time allowing services to make full use of class types.
+
The event model is made up of the following components:
+
+
+
DsfServiceEventHandler annotation
+- This is the only indicator that a given method is an event
+listener. The class with the event handler doesn't have to
+implement any interfaces, but it must be public, which is a big
+drawback.
+
Session.addServiceEventListener,
+ Session.removeServiceEventListener
+methods - These methods allow clients to register for an event
+based on an event class and a service filter, where the filter can be
+used to uniquely identify a service in the case of services with multiple
+instances of the same class.
+
Session.dispatchEvent method -
+This is the method that actually dispatches the event to the
+listeners.
+The method must be called by a service that generates the event.
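+
+Putting these components together, a hedged sketch of declaring and
+registering a listener follows; the listener class, event class, and filter
+are hypothetical:
+
// The listener class must be public for the annotation-based dispatch to find it.
public class CacheInvalidator {
    @DsfServiceEventHandler
    public void handleEvent(DataChangedEvent e) {
        // Dispatched in a Runnable submitted to the session's DsfExecutor.
    }
}

session.addServiceEventListener(new CacheInvalidator(), filter);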
+
+There are only a few more notes about the events mechanism:
+
+
The event is always dispatched in its own Runnable submitted to
+the session's DsfExecutor.
+
There is a slight convenience for clients in not having to register
+for each type of event separately.
+
There is a slight inconvenience for clients in that anonymous
+classes cannot be used as listeners, due to the public class
+requirement.
+requirement.
+
+Service Interfaces
+
+The DSF framework includes a set of service interfaces for a typical
+debugger implementation. Functionally, they are pretty much
+equivalent to the platform debug interfaces, but they are structured in
+a way that allows a debugger to implement only some of them. In
+order for the startup and shutdown process to work effectively, the
+dependencies between services need to be clearly defined. The
+dependencies between the main service interfaces are shown in the graph
+below:
+
+[Diagram: dependency graph of the main DSF service interfaces]
It's also important to realize that it's unlikely that a single
+hierarchy of interfaces will adequately fit all the various debugger
+use cases, and it is likely that some interfaces will be needed which
+partially duplicate functionality found in other interfaces.
+An example of this in the proposed interface set are the interfaces
+which are used to initiate a debugging session. The INativeProcesses service is
+intended as a simple abstraction for native debuggers, where a
+debugger only needs an existing host process ID or an executable image
+name. Based on this, an INativeProcesses
+debugger implementation should be able to initiate a debugging session,
+and return the run-control, memory, and symbol contexts that are required
+to carry out debugging operations. By comparison, IOS and ITarget are generic interfaces
+which allow clients to manage multiple target definitions, to
+examine a wide array of OS objects, and to attach a debugger to a
+process or some other debuggable entity.
+
+
+
Disclaimer
+Drafting large APIs that are intended to have many implementations and
+to be used by many clients is a notoriously difficult task. It is
+impossible to expect that a first draft of such interfaces will not
+require changes, and only time and multiple successful implementations
+can validate them. While we can draw upon many examples of
+debugger
+APIs in Eclipse and in our commercial debugger, this is a new API with
+a
+prototype that exercises only a small portion of its interfaces.
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/plugins/org.eclipse.dd.doc.dsf/docs/dsf_white_paper.html b/plugins/org.eclipse.dd.doc.dsf/docs/dsf_white_paper.html
new file mode 100644
index 00000000000..7b41f5c87d2
--- /dev/null
+++ b/plugins/org.eclipse.dd.doc.dsf/docs/dsf_white_paper.html
@@ -0,0 +1,315 @@
+
+
+
+
+ DSF White Paper
+
+
+
+DSF is a service framework for implementing the model and communication
+layers of Eclipse debuggers. The framework itself is actually
+very small in terms of lines of code, because it mostly builds on top
+of existing standard frameworks of OSGI services and Java 5.0
+concurrency features. The value of DSF is the set of utilities,
+patterns, and guidelines that together help solve some of the more
+difficult problems we have with existing Eclipse debugger
+implementations.
+
Design goals
+The
+primary design goal is to overcome the problems with
+existing Eclipse debuggers. These problems are:
+
+
Poor performance when debugging a remote
+target (over a slow connection).
+
+
Synchronous debugger communication, which
+results in poor throughput of data.
+
The amount of data that is retrieved from the target
+is based on the data model, rather than on what's visible to the user
+on the screen.
+
No ability to filter or to choose update
+policies, which could allow the user to control what is retrieved from the
+target.
+
+
No modularity in APIs or debugger
+implementations.
+
+
Specialized debuggers must use forking and
+duplication of common code to provide unique features.
+
Degenerate debuggers (with a subset of
+functionality of conventional debuggers) must implement a lot of
+interfaces that are meaningless to their users.
+
It's difficult to modify or selectively
+replace interfaces, because all interfaces have references to each
+other.
+
+
Difficulty in
+customizing data presentation for different types of debugging.
+
+
Features
+The DSF features described below correspond more or less one-to-one to
+the problems in the Design Goals section.
+
Concurrency Model
It may be a surprise that simply adopting a threading model could solve
+performance problems with debugger communication, but indirectly, it
+actually does. The primary reason for poor performance with
+remote targets in debuggers such as CDT is the synchronous nature of
+target communication. When a request is made at the UI level that
+results in a command being sent to the target, the client thread
+is blocked while the command is being processed. After the result
+is finally retrieved, the client makes the next request for data and is
+blocked again. In this pattern the responsiveness of the UI is
+poor, and the majority of this performance hit is due to the latency of
+the communication channel to the debugger back end.
+
There is one major improvement to this pattern implemented in the
+platform already. The platform debugger views have been
+re-written so that they spin off a separate thread for each separable
+call to the debug model. The multiple threads each result in
+individual requests being sent to the target, and each thread is
+blocked waiting for the result. Overall, the responsiveness of the
+view is improved because all the request threads execute in
+parallel. However, there is one obvious limitation of this
+approach: creating a lot of new threads, even when using a thread pool,
+is an expensive operation and can in itself degrade performance,
+so this solution doesn't scale well to programs that have, for example,
+thousands of threads or variables.
+
+
+There is also a more subtle limitation of using jobs. Most
+debuggers have a very lopsided performance characteristic, where it
+takes a long time to initiate a query on the target, but once a query
+is run, it takes relatively little extra time to retrieve larger
+amounts of data. Therefore, to better optimize the performance of
+communicating with a remote target, it is important to coalesce
+individual requests into queries for larger chunks of data. This
+is a rather complicated problem, mostly because the commands available
+in debugger back ends vary depending on the type of data being
+retrieved. Also, different types of data require different types
+of coalescing. For example, where it might be possible to
+retrieve memory in arbitrarily sized chunks, registers may be
+retrievable only in groups. There is one thing all coalescing
+solutions have in common, though: they need to convert the calls
+that are made to the service into objects, which can be compared,
+sorted, and pooled together. Management of such objects requires
+a lot of state information to be tracked by the service, and managing
+the cache of the request results requires even more state
+information.
+
+Managing a lot of state information, which coalescing optimization
+requires, is exceedingly difficult in a free multi-threaded
+environment. This is because the more state information there is
+in the system, the more semaphores are needed to avoid race conditions,
+and the more semaphores are used, the greater the chance that deadlocks
+will occur. There are many methods for managing concurrency in
+systems with a lot of state information, and they all have some
+drawbacks. One such example is the Eclipse resource system's use of
+ISchedulingRule and
+jobs. Unfortunately, this concurrency model would not work
+well for the debugger, because the resource system has a clearly defined
+hierarchy to its data: Workspace/Projects/File. This makes it easy to lock a
+portion of the tree and still allow other clients to interact with
+it. For debugger services, the relationships between state data are
+not clearly defined and are often very complicated, so if scheduling rules
+were applied in a debugger implementation, they would likely degrade
+performance, because each request would probably need to lock the
+entire system.
+
+
+For its concurrency model, DSF imposes a strict threading
+model. All services that make
+up a debugger implementation must talk to each other using a single
+dispatch thread, and no service can make a blocking call while in the
+dispatch thread. Conceptually, this rule can be interpreted
+as: all communication between services is accomplished by runnables in
+a thread pool, where the thread pool size is just one. The effect
+of this policy is that the dispatch thread acts as a single global
+semaphore, and when executing on the dispatch thread, a client or a
+service can perform arbitrarily complex operations, and can poll the
+state of as many services as necessary, without worrying about the state
+of the system changing concurrently. The single-threading rule
+only applies to the service interfaces, and does not preclude
+multi-threading in the service implementations. In fact,
+multi-threading is utilized more in this architecture, because many
+blocking operations that would normally be performed on shared threads,
+possibly slowing the UI responsiveness, now need to be performed using
+background threads.
+
+
+In summary, a restrictive threading model combined with asynchronous
+interfaces is the DSF solution to communication performance problems,
+because it allows debugger implementations to have highly complex logic
+that handles coalescing and cancelling of requests, intelligent caching
+of debugger requests, and other advanced features such as filtering and
+configurable update policies.
+
Services Model
+
+
Fortunately,
+it's easier to see the connection between a
+services model and addressing modularity problems.
+
Most current
+debugger
+implementations don't make an effort to separate out different
+components that make
+up the data model and communication layers. It is true that UI
+components
+usually interact with clearly defined data model interfaces, and in
+the case of CDT
+the data model is somewhat separated from the communication layer using
+the CDI
+interface. However, within the CDT data model and communication
+layer
+interfaces, there are enough references between the various objects to
+make all
+of them essentially inter-dependent. Furthermore, in the
+implementation
+of these layers, components use internal knowledge of other
+components.
+This is perfectly acceptable if we assume that the debugger
+implementation is
+going to be used as a single module, and any extensions can be built on
+top of
+it. But, it is important that vendors be able to selectively pick
+and
+choose components which they would like to reuse "as is" and which
+components they would like to extend, modify, replace, or not use at
+all.
+In order to achieve that kind of modularity, a lot of design work has
+to go
+into interfaces not just between the major layers of implementation,
+but also
+between various components that make up these layers.
+
+
+
To help build
+a modular
+architecture, DSF builds on the OSGI services framework, by providing
+additional functionality of:
+
+
organizing services into sessions,
+
managing start-up and shut-down processes,
+
managing events between services.
+
+
Additionally,
+DSF includes an
+initial draft of service interfaces designed to build a modular
+debugger
+implementation. These interfaces must be validated, and this can
+only be
+realistically accomplished by implementing several full-featured and
+diverse
+debuggers. We are seeking additional debug tool vendors from the
+community to port to these interfaces, in addition to Wind River.
+
Data Model
+
The problems
+of the data model are perhaps less severe than
+problems of performance and modularity, but this is an area with a lot
+of room
+for innovation. We are used to thinking of the debug data model
+in
+rather rigid terms, where there is a defined hierarchy of debug
+targets,
+threads, stack frames, variables, sub-expressions, etc. We are
+also used
+to seeing standard debug views of threads, stack frames, locals, and
+watch. These expectations seem to be pretty accurately reflected
+in the
+platform debug model, on top of which all of the current Eclipse
+debuggers are
+based. This is a problem for two reasons:
+
+
The direct
+references between different types of objects prevent the debug model
+implementation from being modular.
+
Extensions to
+the debug model are limited to additions in functionality of the basic
+platform objects and some additional object types.
+
+
Fortunately
+in release 3.2, the Eclipse platform introduced
+a way to circumvent the standard platform model and to drive the
+content of
+most of the standard debugger views using a completely custom data
+model and a
+set of viewer adapters. DSF aims to take advantage of this new
+capability
+to address the above problems, as well as to provide the additional
+benefits
+of:
+
+
Improving
+performance by using the DSF dispatch thread model and asynchronous
+methods.
+
Giving the
+user the ability to fully customize, at runtime, the content and layout of
+debugger views.
+
+
Points 1, 2,
+and 3 are a side effect of DSF's Threading
+Model and Services Model used in conjunction with the platform's
+flexible
+hierarchy interfaces. Point 4 is an innovative and exciting
+feature that
+naturally builds on top of the service model and flexible hierarchy.
+In the first release of DSF to open source,
+we have not yet implemented the capability described in point 4.
+The
+design for this feature calls for data-driven, configurable views,
+where the
+configuration data drives the content and label providers to retrieve
+appropriate information from the data model.
+On the service side, there needs to be a published data model
+schema and
+a query language interpreter, which will retrieve the data for clients.
+We expect community discussion and design
+work to help solve this problem, and we intend to present
+implementations from
+our commercial product as one possible solution.
+
One final
+point is that although the
+DSF data model is fundamentally different from the platform debug model
+and the
+CDT extensions, a DSF debugger could easily be adapted to provide any
+of these
+APIs. This may require considerable effort, especially for
+extensive
+APIs like CDI, but is desirable and necessary to support the existing
+CDT
+community.