From 7158eeed0760be7aff59d2ec7b83a8988681c262 Mon Sep 17 00:00:00 2001 From: Pawel Piech Date: Mon, 24 Mar 2008 16:34:37 +0000 Subject: [PATCH] [220446][219907] Updated the doc plugin to point to the new DSF tutorials. --- .../docs/dsf_concurrency_model-1.png | Bin 6256 -> 0 bytes .../docs/dsf_concurrency_model.html | 432 ------------------ .../docs/dsf_data_model.html | 286 ------------ .../docs/dsf_mi_instructions.html | 135 ------ .../docs/dsf_services_model-1.png | Bin 2224 -> 0 bytes .../docs/dsf_services_model-2.png | Bin 1761 -> 0 bytes .../docs/dsf_services_model.html | 363 --------------- plugins/org.eclipse.dd.doc.dsf/toc.xml | 16 +- 8 files changed, 9 insertions(+), 1223 deletions(-) delete mode 100644 plugins/org.eclipse.dd.doc.dsf/docs/dsf_concurrency_model-1.png delete mode 100644 plugins/org.eclipse.dd.doc.dsf/docs/dsf_concurrency_model.html delete mode 100644 plugins/org.eclipse.dd.doc.dsf/docs/dsf_data_model.html delete mode 100644 plugins/org.eclipse.dd.doc.dsf/docs/dsf_mi_instructions.html delete mode 100644 plugins/org.eclipse.dd.doc.dsf/docs/dsf_services_model-1.png delete mode 100644 plugins/org.eclipse.dd.doc.dsf/docs/dsf_services_model-2.png delete mode 100644 plugins/org.eclipse.dd.doc.dsf/docs/dsf_services_model.html diff --git a/plugins/org.eclipse.dd.doc.dsf/docs/dsf_concurrency_model-1.png b/plugins/org.eclipse.dd.doc.dsf/docs/dsf_concurrency_model-1.png deleted file mode 100644 index 1bb373447d7bd0f17cd54f8298fc6f65658c8420..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 6256 zcmb7IXH-*Nvra<5D1dv zrDzaB2SJe=-}hVJk9&XJv(_nlpPAWn=FB{MX5x(vb)nZ-t^oi5s2=RL2><{JBR)q! 
[GIT binary patch data for dsf_concurrency_model-1.png omitted]
diff --git a/plugins/org.eclipse.dd.doc.dsf/docs/dsf_concurrency_model.html b/plugins/org.eclipse.dd.doc.dsf/docs/dsf_concurrency_model.html
deleted file mode 100644
index 9ef49c39f64..00000000000
--- a/plugins/org.eclipse.dd.doc.dsf/docs/dsf_concurrency_model.html
+++ /dev/null
@@ -1,432 +0,0 @@
DSF Concurrency Model

DSF Concurrency Model

-

-

-

Version -1.0
-Pawel Piech
-© 2006, Wind River Systems.  Released under EPL version 1.0.

-

Introduction

-Providing a solution to concurrency problems is the primary design goal of DSF.  To that end, DSF imposes a rather draconian restriction on services that use it: 1) all service interface methods must be called on a single designated dispatch thread, unless explicitly stated otherwise, and 2) the dispatch thread must never be used to make a blocking call (a call that waits on I/O or performs a long-running computation).  The first restriction effectively means that the dispatch thread becomes a global "lock" that all DSF services in a given session share with each other, and which controls access to most of the services' shared data.  It's important to note that multi-threading is still allowed within an individual service implementation, but when crossing service interface boundaries, only the dispatch thread can be used.  The second restriction simply ensures that the performance of the whole system is not killed by one service that needs to read a huge file over the network.  Another way of looking at it is that the service implementations practice co-operative multi-threading using the single dispatch thread.
-
-There are a couple of obvious side effects that result from this rule:
-
  1. When executing within the dispatch thread, the state of the services is guaranteed not to change.  This means that thread-defensive programming techniques, such as making duplicates of lists before iterating over them, are not necessary.  It is also possible to implement much more complicated logic which polls the state of many objects, without worrying about deadlocks.
  2. Whenever a blocking operation needs to be performed, it must be done using an asynchronous method.  By the time the operation is completed and the caller regains the dispatch thread, the caller may need to re-test the relevant state of the system, because it could have changed completely while the asynchronous operation was executing.
-
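These two side effects can be demonstrated with plain java.util.concurrent tools. The sketch below (class and field names are ours, not part of DSF) uses a single-threaded executor as a stand-in for the DSF dispatch thread: shared state needs no locking, because only the dispatch thread ever touches it.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class DispatchThreadSketch {
    public static int run() throws InterruptedException {
        // A single-threaded executor plays the role of the DSF dispatch
        // thread: submitted runnables execute serially, in submission order.
        ExecutorService dispatch = Executors.newSingleThreadExecutor();
        final int[] sharedState = {0};
        // Runnables may be submitted from any thread; the shared state needs
        // no synchronization because only the dispatch thread modifies it.
        for (int i = 0; i < 1000; i++) {
            dispatch.execute(new Runnable() {
                public void run() { sharedState[0]++; }
            });
        }
        dispatch.shutdown();
        dispatch.awaitTermination(10, TimeUnit.SECONDS);
        return sharedState[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```

With free threading, the increment above would be a textbook race condition; serializing everything through one dispatch thread makes it correct without any locks.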

The Mechanics

-

java.util.concurrent.ExecutorService
-

-DSF builds on the vast array of tools added in Java 5.0's -java.util.concurrent package (see http://java.sun.com/j2se/1.5.0/docs/guide/concurrency/index.html -for details), where the most important is the ExecutorService -interface.  ExecutorService -is a formal interface for submitting Runnable objects that will be -executed according to executor's rules, which could be to execute the -Runnable immediately, -within a thread pool, using a display thread, -etc.  For DSF, the main rule for executors is that they have -to use a single thread to execute the runnable and that the runnables -be executed in the order that they were submitted.  To give the -DSF clients and services a method for checking whether they are -being called on the dispatch thread, we extended the ExecutorService -interface as such:
-
public interface DsfExecutor extends ScheduledExecutorService
{
    /**
     * Checks if the thread that this method is called in is the same as the
     * executor's dispatch thread.
     * @return true if in DSF executor's dispatch thread
     */
    public boolean isInExecutorThread();
}
-
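One way such an executor could be implemented (this is our hypothetical sketch, not the actual DSF implementation) is to capture the single worker thread through the thread factory and compare it against the calling thread:

```java
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.ThreadFactory;

public class SimpleDsfExecutor extends ScheduledThreadPoolExecutor {
    private volatile Thread fThread;

    public SimpleDsfExecutor() {
        // Pool size 1 guarantees a single dispatch thread; capture it as
        // it is created by the thread factory.
        super(1);
        setThreadFactory(new ThreadFactory() {
            public Thread newThread(Runnable r) {
                fThread = new Thread(r, "DSF dispatch");
                return fThread;
            }
        });
    }

    public boolean isInExecutorThread() {
        return Thread.currentThread() == fThread;
    }

    // Demo: returns {result on calling thread, result on dispatch thread}.
    public static boolean[] demo() throws Exception {
        final SimpleDsfExecutor exec = new SimpleDsfExecutor();
        final boolean[] results = new boolean[2];
        results[0] = exec.isInExecutorThread();   // false: main thread
        exec.submit(new Runnable() {
            public void run() { results[1] = exec.isInExecutorThread(); }
        }).get();                                 // true: dispatch thread
        exec.shutdown();
        return results;
    }
}
```

Clients and services can then use `assert executor.isInExecutorThread();` at the top of interface methods to catch threading violations early.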

java.util.concurrent.Future vs org.eclipse.dd.dsf.concurrent.Done

-The Done object encapsulates the return value of an asynchronous call in DSF.  It is actually merely a Runnable with an attached org.eclipse.core.runtime.IStatus object, but it can be extended by services or clients to hold whatever additional data is needed.  The typical pattern for using the Done object is as follows:
-
Service:
public class Service {
    void asyncMethod(Done done) {
        new Job() {
            public void run() {
                // perform calculation
                ...
                done.setStatus(new Status(IStatus.ERROR, ...));
                fExecutor.execute(done);
            }
        }.schedule();
    }
}

Client:
...
Service service = new Service();
final String clientData = "xyz";
...
service.asyncMethod(new Done() {
    public void run() {
        if (getStatus().isOK()) {
            // Handle return data
            ...
        } else {
            // Handle error
            ...
        }
    }
});
-The service performs the asynchronous operation on a background thread, but it can still submit the Done runnable with the executor.  In other words, the Done and other runnables can be submitted from any thread, but will always execute on the single dispatch thread.  Also, if the implementation of asyncMethod() is non-blocking, it does not need to start a job; it could just perform the operation on the dispatch thread.  On the client side, care has to be taken to save the appropriate state before the asynchronous method is called, because by the time the Done is executed, the client state may have changed.
-
-The java.util.concurrent package doesn't already have a Done, because the generic concurrent package is geared more towards large thread pools, where clients submit tasks to be run in a style similar to Eclipse's Jobs, rather than towards the single-dispatch-thread model of DSF.  The concurrent package does, however, have a roughly equivalent object, Future.  Future has a get() method that allows the client to block while waiting for a result, and for this reason it cannot be used from the dispatch thread.  But it can be used, in a limited way, by clients running on a background thread that still need to retrieve data from synchronous DSF methods.  In this case the code might look like the following:
-
Service:
public class Service {
    int syncMethod() {
        // perform calculation
        ...
        return result;
    }
}

Client:
...
DsfExecutor executor = new DsfExecutor();
final Service service = new Service(executor);
Future<Integer> future = executor.submit(new Callable<Integer>() {
    Integer call() {
        return service.syncMethod();
    }
});
int result = future.get();
-The biggest drawback to using Future with DSF services is that it does not work with asynchronous methods.  This is because the Callable.call() implementation has to return a value within a single dispatch cycle.  To get around this, DSF has an additional object called DsfQuery, which works like a Future combined with a Callable, but allows the implementation to make multiple dispatches before setting the return value for the client.  The DsfQuery object works as follows:
-
-
  1. The client creates the query object with its own implementation of DsfQuery.execute().
  2. The client calls the DsfQuery.get() method on a non-dispatch thread, and blocks.
  3. The query is queued with the executor, and eventually the DsfQuery.execute() method is called on the dispatch thread.
  4. The DsfQuery.execute() implementation calls whatever synchronous and asynchronous methods are needed to do its job.
  5. The query code calls the DsfQuery.done() method with the result.
  6. The DsfQuery.get() method un-blocks and returns the result to the client.
-
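The steps above can be sketched with modern Java as follows. This is a hypothetical simplification (the real DsfQuery API differs): execute() runs on the dispatch thread, possibly spanning several dispatch cycles, while get() blocks a non-dispatch caller until done() supplies the result.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public abstract class QuerySketch<V> {
    private final CompletableFuture<V> fResult = new CompletableFuture<V>();

    // Runs on the dispatch thread; may span several dispatch cycles.
    protected abstract void execute();

    protected void done(V result) { fResult.complete(result); }

    public V get(Executor dispatchExecutor) throws Exception {
        dispatchExecutor.execute(new Runnable() {
            public void run() { execute(); }
        });
        return fResult.get();   // blocks caller until done() is called
    }

    public static int demo() throws Exception {
        final ExecutorService dispatch = Executors.newSingleThreadExecutor();
        QuerySketch<Integer> query = new QuerySketch<Integer>() {
            protected void execute() {
                // Simulate a multi-dispatch computation: schedule a second
                // dispatch cycle before producing the result.
                dispatch.execute(new Runnable() {
                    public void run() { done(6 * 7); }
                });
            }
        };
        int result = query.get(dispatch);
        dispatch.shutdown();
        return result;
    }
}
```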

Slow -Data Provider Example

-The point of DSF concurrency can be most easily explained through -a practical example.  Suppose there is a viewer which needs to -show data that originates from a remote "provider".  There is a -considerable delay in transmitting the data to and from the provider, -and some delay in processing the data.  The viewer is a -lazy-loading table, which means that it request information only about -items that are visible on the screen, and as the table is scrolled, new -requests for data are generated.  The diagram below illustrates -the -logical relationship between components:
-
-[Diagram: table viewer ↔ content provider ↔ data provider]
-

In detail, these components look like this:

-

-Table Viewer
-

The table viewer is the standard -org.eclipse.jface.viewers.TableViewer, -created with SWT.VIRTUAL -flag.  It has an associated content -provider, SlowDataProviderContentProvider) which handles all the -interactions with the data provider.  The lazy content provider -operates in a very simple cycle:

-
  1. The table viewer tells the content provider that the input has changed by calling IContentProvider.inputChanged().  This means that the content provider has to query the initial state of the data.
  2. Next, the content provider tells the viewer how many elements there are, by calling TableViewer.setItemCount().
  3. At this point the table resizes, and it requests data values for the items that are visible.  For each visible item it calls ILazyContentProvider.updateElement().
  4. After calculating the value, the content provider tells the table what the value is, by calling TableViewer.replace().
  5. If the data ever changes, the content provider tells the table to re-request the data, by calling TableViewer.clear().
-The table viewer operates in the SWT display thread, which means that the content provider must switch from the display thread to the DSF dispatch thread whenever it is called by the table viewer, as in the example below:
-
    public void updateElement(final int index) {
        assert fTableViewer != null;
        if (fDataProvider == null) return;

        fDataProvider.getExecutor().execute(
            new Runnable() { public void run() {
                // Must check again, in case disposed while redispatching.
                if (fDataProvider == null) return;

                queryItemData(index);
            }});
    }
-Likewise, when the content provider calls the table viewer, it has to switch back into the display thread, as in the following example, where the content provider receives an event from the data provider indicating that an item value has changed.
-
    public void dataChanged(final Set<Integer> indexes) {
        // Check for dispose.
        if (fDataProvider == null) return;

        // Clear changed items in table viewer.
        if (fTableViewer != null) {
            final TableViewer tableViewer = fTableViewer;
            tableViewer.getTable().getDisplay().asyncExec(
                new Runnable() { public void run() {
                    // Check again if table wasn't disposed when
                    // switching to the display thread.
                    if (tableViewer.getTable().isDisposed()) return;
                    for (Integer index : indexes) {
                        tableViewer.clear(index);
                    }
                }});
        }
    }
-All of this switching back and forth between threads makes the code look more complicated than it really is, and it takes some getting used to, but this is the price to be paid for multi-threading.  Whether the participants use semaphores or the dispatch thread, the logic is equally complicated, and we believe that using a single dispatch thread makes the synchronization very explicit and thus less error-prone.
-

Data Provider Service

-

The data provider service interface, DataProvider, is very similar -to that of the lazy content provider.  It has methods to:

-
  • get the item count
  • get a value for a given item
  • register as a listener for changes in the data count and data values
-But this is a DSF interface, and all methods must be called on the -service's dispatch thread.  For this reason, the DataProvider interface returns -an instance of DsfExecutor, -which must be used with the interface.
-

Slow Data Provider

-

The data provider is actually implemented as a thread which is an -inner class of SlowDataProvider -service.  The provider thread -communicates with the service by reading Request objects from a shared -queue, and by posting Runnable objects directly to the DsfExecutor but -with a simulated transmission delay.  Separately, an additional -flag is also used to control the shutdown of the provider thread.

-To simulate a real back end, the data provider randomly invalidates a -set of items and notifies the listeners to update themselves.  It -also periodically invalidates the whole table and forces the clients to -requery all items.
-

Data and Control Flow
-

-This can be described in following steps:
-
    -
  1. The table viewer requests data for an item at a given index (SlowDataProviderContentProvider.updateElement).
    -
  2. -
  3. The table viewer's content provider executes a Runnable in the DSF -dispatch thread and calls the data provider interface (SlowDataProviderContentProvider.queryItemData).
  4. -
  5. Data provider service creates a Request object, and files it in a -queue (SlowDataProvider.getItem).
  6. -
  7. Data provider thread de-queues the Request object and acts on it, -calculating the value (ProviderThread.processItemRequest).
  8. -
  9. Data provider thread schedules the calculation result to be -posted with DSF executor (SlowDataProvider.java:185).
  10. -
  11. The Done callback sets the result data in the table viewer (SlowDataProviderContentProvider.java:167).
    -
  12. -
-
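At its core, this flow is a producer/consumer hand-off between the dispatch thread and the provider thread. A condensed sketch (all names here are ours, not the example's) of the queue, the provider thread, and the completion posted back through the dispatch executor:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class ProviderThreadSketch {
    // A request pairs an item index with a completion callback.
    static class Request {
        final int fIndex;
        final CompletableFuture<String> fDone = new CompletableFuture<String>();
        Request(int index) { fIndex = index; }
    }

    public static String demo() throws Exception {
        final BlockingQueue<Request> queue = new LinkedBlockingQueue<Request>();
        final ExecutorService dispatch = Executors.newSingleThreadExecutor();

        // Provider thread: de-queues a request, "calculates" the value, and
        // posts the completion back through the dispatch executor.
        Thread provider = new Thread(new Runnable() {
            public void run() {
                try {
                    final Request request = queue.take();
                    dispatch.execute(new Runnable() {
                        public void run() {
                            request.fDone.complete("item " + request.fIndex);
                        }
                    });
                } catch (InterruptedException e) { /* shutting down */ }
            }
        });
        provider.start();

        // Client side: file a request, then block for the callback result.
        Request request = new Request(7);
        queue.add(request);
        String value = request.fDone.get();
        provider.join();
        dispatch.shutdown();
        return value;
    }
}
```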

Running the example and full sources

-This example is implemented in the org.eclipse.dd.dsf.examples -plugin, in the org.eclipse.dd.dsf.examples.concurrent -package. 
-
-To run the example:
-
  1. Build the test plugin (along with the org.eclipse.dsdp.DSF plugin) and launch the PDE.
  2. Make sure to add the DSF Tests action set to your current perspective.
  3. From the main menu, select DSF Tests -> Slow Data Provider.
  4. A dialog will open, and after a delay it will populate with data.
  5. Scroll and resize the dialog and observe the update behavior.
-

Initial Notes
-

-This example is supposed to be representative of a typical embedded debugger design problem.  Embedded debuggers are often slow in retrieving and processing data, and can sometimes be accessed through a relatively slow data channel, such as a serial port or JTAG connection.  As such, this basic example presents a couple of major usability problems:

  1. The data provider service interface mirrors the table's content provider interface, in that it has a method to retrieve a single piece of data at a time.  The result of this is visible to the user as lines of data are filled in one-by-one in the table.  However, most debugger back ends are in fact capable of retrieving data in batches and are much more efficient at it than at retrieving data items one-by-one.
  2. When scrolling quickly through the table, requests are generated by the table viewer for items which are quickly scrolled out of view, but the service still queues them up and calculates them in the order they were received.  As a result, it takes a very long time for the table to be populated with data at the location where the user is looking.
-These two problems are very common in creating UI for embedded -debugging, and there are common patterns which can be used to solve -these problems in DSF services.
-

Coalescing

-Coalescing many single-item requests into fewer multi-item requests is -the surest way to improve performance in communication with a remote -debugger, although it's not necessarily the simplest.  There are -two basic patterns in which coalescing is achieved:
-
  1. The back end provides an interface for retrieving data in large chunks.  When the service implementation receives a request for a single item, it retrieves a whole chunk of data, returns the single item, and stores the rest of the data in a local cache.
  2. The back end provides an interface for retrieving data in variable-size chunks.  When the service implementation receives a request for a single item, it buffers the request and waits for other requests to come in.  After a delay, the service clears the buffer and submits a request for the combined items to the data provider.
-In practice, a combination of the two patterns is needed, but for -purpose of an example, we implemented the second pattern in the -"Input-Coalescing Slow Data Provider" (InputCoalescingSlowDataProvider.java).  -
-
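Stripped to its essentials, the second (input-buffering) pattern looks like the sketch below. This is our hypothetical condensation, not the example's actual code: requests buffered on the dispatch thread are flushed as one batch after a delay, and no locking is needed because only the dispatch thread touches the buffer.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class CoalescingSketch {
    private static final int COALESCING_DELAY_MS = 50;

    private final ScheduledExecutorService fExecutor =
        Executors.newSingleThreadScheduledExecutor();
    private final List<Integer> fBuffer = new ArrayList<Integer>();
    private final CompletableFuture<List<Integer>> fBatch =
        new CompletableFuture<List<Integer>>();

    public void getItem(final int index) {
        fExecutor.execute(new Runnable() {
            public void run() {
                // Schedule a flush only when the buffer becomes non-empty;
                // only the dispatch thread ever touches fBuffer, so no locks.
                if (fBuffer.isEmpty()) {
                    fExecutor.schedule(new Runnable() {
                        public void run() {
                            fBatch.complete(new ArrayList<Integer>(fBuffer));
                            fBuffer.clear();
                        }
                    }, COALESCING_DELAY_MS, TimeUnit.MILLISECONDS);
                }
                fBuffer.add(index);
            }
        });
    }

    // Five single-item calls arrive within the delay window and are
    // flushed as a single combined batch.
    public static int demo() throws Exception {
        CoalescingSketch sketch = new CoalescingSketch();
        for (int i = 0; i < 5; i++) sketch.getItem(i);
        int batchSize = sketch.fBatch.get().size();
        sketch.fExecutor.shutdown();
        return batchSize;
    }
}
```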

Input Buffer

-

The main feature of this pattern is a buffer for holding the -requests before sending them to the data provider.  In this -example the user requests are buffered in two arrays: fGetItemIndexesBuffer and fGetItemDonesBuffer.  The -DataProvider.getItem() -implementation is changed as follows:

-
    public void getItem(final int index, final GetDataDone<String> done) {
        // Schedule a buffer-servicing call, if one is needed.
        if (fGetItemIndexesBuffer.isEmpty()) {
            fExecutor.schedule(
                new Runnable() { public void run() {
                    fileBufferedRequests();
                }},
                COALESCING_DELAY_TIME,
                TimeUnit.MILLISECONDS);
        }

        // Add the call data to the buffer.
        // Note: it doesn't matter that the items were added to the buffer
        // after the buffer-servicing request was scheduled.  This is because
        // the buffers are guaranteed not to be modified until this dispatch
        // cycle is over.
        fGetItemIndexesBuffer.add(index);
        fGetItemDonesBuffer.add(done);
    }

-And method that services the buffer looks like this:
-
    public void fileBufferedRequests() {
        // Remove a number of getItem() calls from the buffer, and combine
        // them into a request.
        int numToCoalesce = Math.min(fGetItemIndexesBuffer.size(), COALESCING_COUNT_LIMIT);
        final ItemRequest request = new ItemRequest(new Integer[numToCoalesce], new GetDataDone[numToCoalesce]);
        for (int i = 0; i < numToCoalesce; i++) {
            request.fIndexes[i] = fGetItemIndexesBuffer.remove(0);
            request.fDones[i] = fGetItemDonesBuffer.remove(0);
        }

        // Queue the coalesced request, with the appropriate transmission delay.
        fQueue.add(request);

        // If there are still calls left in the buffer, execute another
        // buffer-servicing call, but without any delay.
        if (!fGetItemIndexesBuffer.isEmpty()) {
            fExecutor.execute(new Runnable() { public void run() {
                fileBufferedRequests();
            }});
        }
    }
-The most interesting feature of this implementation is the fact that there are no semaphores anywhere to control access to the input buffers.  Even though the buffers are serviced with a delay, and multiple clients can call the getItem() method, the use of a single dispatch thread prevents any race conditions that could corrupt the buffer data.  In real-world implementations, the buffers and caches that need to be used are far more sophisticated, with much more complicated logic, and this is where managing access to them using the dispatch thread is even more important.
-

Cancellability

-

Table Viewer

-

-Unlike coalescing, which can be implemented entirely within the service, cancellability requires that the client be modified as well to take advantage of this capability.  For the table viewer content provider, this means that additional features have to be added.  In CancellingSlowDataProviderContentProvider.java, ILazyContentProvider.updateElement() was changed as follows:
-
    public void updateElement(final int index) {
        assert fTableViewer != null;
        if (fDataProvider == null) return;

        // Calculate the visible index range.
        final int topIdx = fTableViewer.getTable().getTopIndex();
        final int botIdx = topIdx + getVisibleItemCount(topIdx);

        fCancelCallsPending.incrementAndGet();
        fDataProvider.getExecutor().execute(
            new Runnable() { public void run() {
                // Must check again, in case disposed while redispatching.
                if (fDataProvider == null || fTableViewer.getTable().isDisposed()) return;
                if (index >= topIdx && index <= botIdx) {
                    queryItemData(index);
                }
                cancelStaleRequests(topIdx, botIdx);
            }});
    }
-Now the client keeps track of the requests it made to the service in fItemDataDones, and above, cancelStaleRequests() iterates -through all the outstanding requests and cancels the ones that are no -longer in the visible range.
-
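In a simplified form, that bookkeeping might look like the sketch below. The field and method names here are our own stand-ins, not the example's actual code: outstanding requests are tracked by index, and any request outside the visible range is cancelled and dropped.

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class StaleRequestSketch {
    // Outstanding requests, keyed by item index (dispatch-thread access only).
    private final Map<Integer, Future<?>> fItemDataDones =
        new HashMap<Integer, Future<?>>();

    // Cancel every outstanding request that fell out of the visible range.
    public int cancelStaleRequests(int topIdx, int botIdx) {
        int cancelled = 0;
        for (Iterator<Map.Entry<Integer, Future<?>>> it =
                 fItemDataDones.entrySet().iterator(); it.hasNext();) {
            Map.Entry<Integer, Future<?>> entry = it.next();
            int index = entry.getKey();
            if (index < topIdx || index > botIdx) {
                entry.getValue().cancel(false);
                it.remove();
                cancelled++;
            }
        }
        return cancelled;
    }

    // Demo: 10 pending requests, visible range [3,6] => 6 stale ones cancelled.
    public static int demo() throws Exception {
        StaleRequestSketch sketch = new StaleRequestSketch();
        ExecutorService exec = Executors.newSingleThreadExecutor();
        for (int i = 0; i < 10; i++) {
            // Long-sleeping placeholder tasks stand in for pending queries.
            sketch.fItemDataDones.put(i, exec.submit(new Runnable() {
                public void run() {
                    try { Thread.sleep(60000); } catch (InterruptedException e) {}
                }
            }));
        }
        int cancelled = sketch.cancelStaleRequests(3, 6);
        exec.shutdownNow();
        return cancelled;
    }
}
```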

Data Provider Service

-

-

The data provider implementation (CancellableInputCoalescingSlowDataProvider.java) builds on top of the coalescing data provider.  To make the canceling feature useful, the data provider service has to limit the size of the request queue.  This is because this example simulates communication with a target, and once requests are filed into the request queue they cannot be canceled, just like a client can't cancel requests once it has sent them over a socket.  So instead, if a flood of getItem() calls comes in, the service has to hold most of them in the coalescing buffer in case the client decides to cancel them.  Therefore, the fileBufferedRequests() method includes a simple check before servicing the buffer, and if the request queue is full, the buffer-servicing call is delayed.

-
        if (fQueue.size() >= REQUEST_QUEUE_SIZE_LIMIT) {
            if (fGetItemIndexesBuffer.isEmpty()) {
                fExecutor.schedule(
                    new Runnable() { public void run() {
                        fileBufferedRequests();
                    }},
                    REQUEST_BUFFER_FULL_RETRY_DELAY,
                    TimeUnit.MILLISECONDS);
            }
            return;
        }
-Beyond this change, the only other significant change is that before -the requests are queued, they are checked for cancellation.
-

Final Notes
-

-The example given here is fairly simplistic, and chances are that the same example could be implemented using semaphores and free threading with perhaps fewer lines of code.  But what we have found is that as the problem gets bigger, with the number of features in the data provider increasing, the state of the communication protocol getting more complicated, and the number of modules needed in the service layer growing, using free threading and semaphores does not safely scale.  Using a dispatch thread for synchronization certainly doesn't make the inherent problems of the system less complicated, but it does help eliminate race conditions and deadlocks from the overall system.
-

Coalescing and Cancellability are both optimizations.  Neither -of these optimizations affected the original interface of the service, -and one of them only needed a service-side modification.  But as -with all optimizations, it is often better to first make sure that the -whole system is working correctly and then add optimizations where they -can make the biggest difference in user experience. 

-

The above examples of optimizations can take many forms, and as -mentioned with coalescing, caching data that is retrieved from the data -provider is the most common form of data coalescing.  For -cancellation, many services in DSF build on top of other services, -which means that even a low-level service can cause a higher -level service to retrieve data, while another event might cause it to -cancel those requests.  The perfect example of this is a Variables -service, which is responsible for calculating the value of expressions -shown in the Variables view.  The Variables service reacts to the -Run Control service, which issues a suspended event and then requests a -set of variables to be evaluated by the debugger back end.  But as -soon as a resumed event is issued by Run Control, the Variables service -needs to cancel  the pending evaluation requests.
-

-
-
diff --git a/plugins/org.eclipse.dd.doc.dsf/docs/dsf_data_model.html b/plugins/org.eclipse.dd.doc.dsf/docs/dsf_data_model.html
deleted file mode 100644
index bd1b40112e6..00000000000
--- a/plugins/org.eclipse.dd.doc.dsf/docs/dsf_data_model.html
+++ /dev/null
@@ -1,286 +0,0 @@
DSF Data Model

DSF Data Model

-Version -1.0
-Pawel Piech
-© 2006, Wind River Systems.  Released under EPL version 1.0.
-

Overview

-

The data model aspect of DSF is only partially complete as compared -to the Concurrency and Services Models.  The goals for its design -are:
-

-
  1. Separate the structure of the data in the services from the model used for presentation in views.  This seems like a basic model-viewer separation, which is something that we theoretically already have.  But in reality the current platform debug model APIs closely correspond to how the data is laid out in debug views, and even with the flexible-hierarchy views it is difficult to provide alternative layouts.
  2. Allow for a modular implementation of services that contribute to the data model.
  3. Perform well with large data sets.
  4. Make the data model interfaces convenient to use by other services as well as by views.  Some interim designs of the DSF data model APIs were very well suited for populating a view's (though asynchronous) content and label providers, but were very difficult to use for other purposes, such as by another service, or by a client that creates a dialog.  This led to services implementing two sets of interfaces for the same data, which was more expensive to develop and maintain.
  5. Allow for easy changes to the layout of data in views.  This is from the point of view of a debugger implementer who would like to modify the standard layout of debugger data.
  6. Allow users to modify the layout of data in views.  This is a logical extension of the previous goal.
-

-That's a pretty ambitious set of goals to keep in mind, which partly explains why the design is not fully complete yet.  In particular, the last goal doesn't have any implementation at this point.  Other than that, we believe that our current design mostly meets the remaining goals.  It remains to be seen how well it will hold up beyond a prototype implementation.
-

The DSF data model is divided into two parts: a non-UI part that -helps services expose data in a consistent form, and a UI part that -helps viewers present the data.  They are described separately in -the two sections below.
-

-

Timers Example

-

A "timers -example" is included with the DSF plugins which -demonstrates the use of data model and view model -APIs.   It is probably much easier to digest this document -when referring to this example for usage.
-

-

Data Model API (org.eclipse.dd.dsf.model)
As stated before, the aim of this API is to allow services to provide data with just enough common information that it can be easily presented in a view, but with a simple enough design that the data can also be accessed by non-viewer clients.  The type of data in services can vary greatly from service to service.  For example:
  • Some service data might be extremely large and thus may only be retrieved from a back end process in small chunks, while other service data might always be stored locally in the service.
  • Some data might take a very long time to retrieve, while other data can be retrieved instantaneously.
  • Some services might support canceling a request while it is being processed, while other services might not.
  • Some data may change very frequently, while other data may not change at all.
The data model API tries to find a common denominator for these divergent properties and imposes the following restrictions:
  1. Each "chunk" of data that comes from a service has a corresponding IDataModelContext (Data Model Context) object.
  2. The DM-Context objects are generated by the data model services (IDataModelService), with either synchronous or asynchronous methods, taking whatever arguments are needed.  Put differently, how DM-Contexts are created is up to the service.
  3. The service will provide a method for retrieving each "chunk" of model data (IDataModelData) that requires no arguments besides the DM-Context.
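These three restrictions can be pictured with a small, self-contained sketch.  All names below (ThreadDmc, ThreadService, etc.) are illustrative stand-ins, not the actual DSF types, and a plain Consumer stands in for DSF's asynchronous "done" objects:

```java
import java.util.Map;
import java.util.function.Consumer;

// Toy illustration of the three restrictions above. All names here are
// simplified stand-ins, NOT the real DSF interfaces.
public class DataModelSketch {
    // Restriction 1: each "chunk" of data has a corresponding context handle.
    public static class ThreadDmc {
        public final String sessionId;
        public final int threadId;
        public ThreadDmc(String sessionId, int threadId) {
            this.sessionId = sessionId;
            this.threadId = threadId;
        }
    }

    // The model-data object that the handle resolves to.
    public static class ThreadData {
        public final String name;
        public ThreadData(String name) { this.name = name; }
    }

    public static class ThreadService {
        private final Map<Integer, ThreadData> backEnd =
            Map.of(1, new ThreadData("main"));

        // Restriction 2: how contexts are created is entirely up to the service.
        public ThreadDmc createContext(int threadId) {
            return new ThreadDmc("session-1", threadId);
        }

        // Restriction 3: data retrieval needs no arguments besides the
        // context itself (Consumer plays the role of a DSF "done" callback).
        public void getThreadData(ThreadDmc dmc, Consumer<ThreadData> done) {
            done.accept(backEnd.get(dmc.threadId));
        }
    }
}
```

The point of the pattern is that a client holding only the handle (and the session ID inside it) can always get back to the data, no matter which service created the handle or how.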

DM-Context (IDataModelContext)
-

The DM-Contexts are the most important part of this design, so they warrant a closer look.  The interface is listed below:
    public interface IDataModelContext<V extends IDataModelData> extends IAdaptable {
        public String getSessionId();
        public String getServiceFilter();
        public IDataModelContext[] getParents();
    }
First of all, the interface extends IAdaptable, which allows clients to use these objects as handles that are stored with UI components.  However, the implementation of IDataModelContext.getAdapter() presents a particular challenge.  If the standard platform method of retrieving an adapter is used (PlatformObject.getAdapter()), then there can only be one adapter registered for a given DM-Context class, which has to be shared by all the DSF sessions that are running concurrently.  Thus one debugger that implements an IStack.IFrameDMContext would have to have the same instance of IAsynchronousLabelAdapter as another debugger implementation that is running at the same time.  To overcome this problem, DSF provides a method for registering adapters with a session, using DsfSession.registerModelAdapter(), instead of with the platform (Platform.getAdapterManager().registerAdapters()).
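The session-scoped adapter idea can be sketched with a toy registry keyed by session ID.  This is only a simplified stand-in for what DsfSession.registerModelAdapter() enables, not the real implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of why adapters are registered per-session rather than
// globally: two concurrent debug sessions can then answer the same
// getAdapter() request with different adapter instances.
public class SessionAdapters {
    // (sessionId, adapterType) -> adapter instance; a simplified stand-in
    // for DsfSession.registerModelAdapter(), not the real implementation.
    private static final Map<String, Map<Class<?>, Object>> REGISTRY = new HashMap<>();

    public static void registerModelAdapter(String sessionId, Class<?> type, Object adapter) {
        REGISTRY.computeIfAbsent(sessionId, id -> new HashMap<>()).put(type, adapter);
    }

    // What a DM-Context's getAdapter() would consult: the session's own
    // table, so each running debugger gets its own adapter instance.
    public static Object getAdapter(String sessionId, Class<?> type) {
        Map<Class<?>, Object> adapters = REGISTRY.get(sessionId);
        return adapters == null ? null : adapters.get(type);
    }
}
```

With a global registry keyed only by the context class, the second debugger's registration would overwrite the first; keying by session ID is what lets two concurrent sessions answer the same getAdapter() request differently.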

The getSessionId() method serves two purposes.  First, it allows the IAdaptable.getAdapter() implementation to work as described above.  Second, it allows clients to access the correct dispatch thread (DsfSession.getSession(id).getExecutor()) for calling the service that the DM-Context originated from.

-

The getServiceFilter() method is included to allow for future development.  It is intended to allow the client to precisely identify the service that the DM-Context originated from, without having to examine the exact class type of the DM-Context.  This functionality will not really be needed, however, until we start writing generic/data-driven clients.

-

The getParents() method allows the DM-Contexts to be connected together into something that can be considered a "model".  Of course, most debugger data objects require the context of other objects in order to make sense: a stack frame is meaningless without its thread, debug symbols belong to a module, which belongs to a process, etc.  In other words, there is a natural hierarchy to the data in debug services which needs to be accessible through the data model APIs.  This hierarchy may be the same hierarchy that is shown in some debug views, but it doesn't have to be.  More importantly, this hierarchy should allow for a clean separation of debug services, and for a clear dependency graph between these services.

-

View Model API (org.eclipse.dd.dsf.ui.model)
-

This is the component which allows the DSF data model to be presented in views with different/configurable layouts.  It is tightly integrated with the recently added (and still provisional) flexible-hierarchy viewers in the org.eclipse.debug.ui plugin (see the EclipseCon 2006 presentation for more details).  The platform flexible-hierarchy framework already provides all the adapter interfaces needed to present the DSF data model in the viewers, and it is possible to do just that.  However, the flexible-hierarchy views were not specifically designed for DSF, and a few ugly patterns emerge when using them with the DSF data model interfaces directly:
  • Because of the nature of the IAdaptable pattern, the flexible-hierarchy label and content adapters have to have a single instance that works for all views that the objects appear in.  This leads to a lot of if-else statements, which make the implementation difficult to follow.
  • There is a single adapter for all DSF data model elements in the tree (from the same session), so the adapters have even more if-else statements to handle the different elements in the viewer.
  • Most of the DSF adapter work needs to be performed in the dispatch thread, so each handler starts with a re-dispatch call.
  • In all of this, the logic which determines the hierarchy of elements in the viewer is very hard to follow.
The view model API tries to address these issues in the following ways:
  1. It divides the adapter work for different views into separate ViewModelProvider objects.
  2. It defines the view layout in an object-oriented manner, using IViewModelLayoutNode objects.
  3. It consolidates the logic of switching to the dispatch thread in one place, and allows the ViewModelProvider objects to work only in the dispatch thread.

IViewModelLayoutNode

The core of the logic in this design lies in the implementation of the IViewModelLayoutNode objects.  This interface is listed below:
public interface IViewModelLayoutNode {
    public IViewModelLayoutNode[] getChildNodes();
    public void hasElements(IViewModelContext parentVmc, GetDataDone<Boolean> done);
    public void getElements(final IViewModelContext parentVmc, GetDataDone<IViewModelContext[]> done);
    public void retrieveLabel(IViewModelContext vmc, final ILabelRequestMonitor result);
    public boolean hasDeltaFlags(IDataModelEvent e);
    public void buildDelta(IDataModelEvent e, ViewModelDelta parent, Done done);
    public void sessionDispose();
}
The getChildNodes() method allows these layout nodes to be combined into a tree structure which mimics the layout of elements in the view.  What the children are depends on the implementation: some may be configurable and some may be fixed.
The hasElements() and getElements() methods generate the actual elements that will appear in the view.  They are analogous to the flexible-hierarchy API methods IAsynchronousContentAdapter.isContainer() and IAsynchronousContentAdapter.retrieveChildren(), and are pretty straightforward to implement.  Likewise, retrieveLabel() is directly analogous to IAsynchronousLabelAdapter.retrieveLabel().
The hasDeltaFlags() and buildDelta() methods are used to generate model deltas in response to service events.  These are discussed in the next section.
Finally, in most cases the elements in the views correspond directly to IDataModelContext (DM-Context) objects of a specific type.  For those cases, the DMContextVMLayoutNode abstract class implements the common functionality of that pattern.

Model deltas

The hasDeltaFlags() and buildDelta() methods are used to implement the IModelProxy adapter, and are the trickiest aspect of this design.  The difficulty is that the flexible-hierarchy views require that the IModelProxy translate data-model-specific events into generic model deltas that can be interpreted by the viewer.  The deltas (IModelDelta) are tree structures which are supposed to mirror the structure of nodes in the tree, and which contain flags that tell the viewer what has changed in the view and how.*  This means that if the model proxy receives an event for some IDataModelContext (DM-Context) object, it needs to know whether this object is in the viewer's tree, and what is the full path (or paths) that leads to this object.

The model delta is generated by first calling the top layout node's hasDeltaFlags() with the received event; the node can either return true directly, or ask any of its children whether they have deltas (and the children, in turn, return true or ask their own children, and so on).  If a node returns true from hasDeltaFlags(), then the asynchronous buildDelta() is called with the event and a parent delta node, to generate the delta elements and flags for that node.  Once the layout node generates its delta objects, it still needs to call its children, which in turn add their delta information, and so on.

-

* It's not strictly true that a full path to an element always has to be present for model deltas to work.  If the full path is not present, the viewer will try to find the element using an internal map that it keeps of all of the elements it knows.  But since the viewer is lazy-loading, it is possible (and likely) that the element affected by an event is not even known to the viewer at the time of the event, and for some delta actions, IModelDelta.SELECT and IModelDelta.EXPAND, this is not acceptable.
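The recursive walk described above can be sketched synchronously.  The real DSF methods are asynchronous and take "done" callbacks, and the real deltas are IModelDelta trees rather than strings; everything below is illustrative only:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified, synchronous sketch of the delta-building walk: a node
// either reacts to the event itself or recurses into its children.
public class DeltaSketch {
    public static class LayoutNode {
        final String name;
        final List<LayoutNode> children = new ArrayList<>();
        final boolean handlesEvent; // real code would inspect the event type

        public LayoutNode(String name, boolean handlesEvent) {
            this.name = name;
            this.handlesEvent = handlesEvent;
        }

        public LayoutNode add(LayoutNode child) { children.add(child); return this; }

        // hasDeltaFlags(): true if this node or any descendant reacts to the event.
        public boolean hasDeltaFlags() {
            if (handlesEvent) return true;
            return children.stream().anyMatch(LayoutNode::hasDeltaFlags);
        }

        // buildDelta(): extend the parent path with this node's segment,
        // record a flag if this node reacts, then let interested children
        // extend the delta further -- the recursive descent from the text.
        public void buildDelta(String parentPath, List<String> delta) {
            String path = parentPath + "/" + name;
            if (handlesEvent) delta.add(path + " [CONTENT]");
            for (LayoutNode child : children) {
                if (child.hasDeltaFlags()) child.buildDelta(path, delta);
            }
        }
    }

    public static List<String> deltaFor(LayoutNode root) {
        List<String> delta = new ArrayList<>();
        if (root.hasDeltaFlags()) root.buildDelta("", delta);
        return delta;
    }
}
```

Note how the delta entry carries the full path from the root node down to the affected element, which is exactly the information the viewer needs to locate the element.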

GDB/MI Debugger on top of DSF - Instructions

GDB/MI Debugger implementation based on DSF

-
-
-

Building and Running Instructions
-

-

To build:

-
  1. Install the latest milestone of the Eclipse 3.3 SDK.
  2. Install the latest milestone of CDT 4.0.
  3. Install and configure gdb (cygwin on Windows).
  4. Check out the following projects from /cvsroot/dsdp/org.eclipse.dd.dsf/plugins:
      • org.eclipse.dd.dsf
      • org.eclipse.dd.dsf.ui
      • org.eclipse.dd.dsf.debug
      • org.eclipse.dd.dsf.debug.ui
      • org.eclipse.dd.dsf.mi.core
      • org.eclipse.dd.dsf.mi.ui

To run:

-
  1. Create a new "Managed make build project" called "hello".
  2. Create a simple hello.c source file:

#include <stdio.h>

int main(void) {
    printf("Hello world\n");
    return 0;
}
-
-
  1. Build the project.
  2. Create a new "DSF C/C++ Local Application" launch configuration (the one with the pink icon) and set the executable and the entry point to "main".
  3. Launch and step through.
  4. If the "source not found" page appears, then a path mapping needs to be created.  This is an issue with the latest cygwin gdb:
      1. Click on the "Edit source lookup" button in the editor, or right-click on the launch node in the Debug view and select "Edit source lookup".
      2. Click the "Add..." button.
      3. Select "Path Mapping" and click OK.
      4. Select the new "Path Mapping" source container and click the "Edit..." button.
      5. Once again, click the "Add..." button to create a mapping.
      6. Enter the path to map from.  Look at the stack frame label in the Debug view: if the filename is something like "/cygdrive/c/workspace/hello/hello.c", enter the path to the first real directory, "/cygdrive/c/workspace".
      7. Enter the correct file-system path for the directory entered above.  In the example above, it would be "C:\workspace".
      8. Click OK three times and you'll be back in Kansas... ehm, the Debug view that is.
      9. If the source doesn't show up right away, try stepping once.

Supported Platforms
Currently only Windows with cygwin GDB is supported.

Current Features
-

  • Launching
      • The "DSF C/C++ Local Application" launch configuration is the standard CDT launch configuration minus some of the features.
      • What is NOT working here:
          • Debugger tab: the selection of debugger back ends (gdb/mi, Cygwin gdb debugger, etc.).  The implementation is currently hard-wired for Cygwin.
          • Debugger tab: the Debugger Options section.
  • Debug view
      • Single-thread debugging only.
      • Terminating
      • Stepping
      • Resume/Suspend
  • Console support
      • GDB process output
      • NO user process console support
  • Breakpoints
      • Basic CDT breakpoint support implemented:
          • no filtering support,
          • no advanced options (hardware, temporary, etc.),
          • no watchpoints.
  • Variables
      • not yet

Updated Aug 25th, 2006

DSF Services Model

Version 1.0
Pawel Piech

© 2006, Wind River Systems.  Released under EPL version 1.0.
-

The Debugger Services Framework (DSF) is primarily a service framework defining rules for how services should be registered, discovered, organized into functional groups, communicated with, and started/ended.  These rules help to organize the services into a functional system that efficiently abstracts various debugger back end capabilities.

-

DSF services build on top of the OSGI services framework, so it's important to understand OSGI services before looking at DSF itself.  For an overview of OSGI, including services, see the presentation on OSGI from EclipseCon 2006.  For detailed information, see the OSGI javadocs, primarily in org.osgi.framework: ServiceRegistration, BundleContext, ServiceReference, Filter, and ServiceTracker.

-

Services
-

In OSGI, any class can be registered as a service.  In DSF, services must implement the IDsfService interface, which requires that the service provide:

  1. Access to the DsfExecutor that has to be used to access the service's methods.
  2. The full list of properties used to uniquely identify the service in OSGI.
  3. Startup and shutdown methods.

For the first two items, a service must use the data it received from its constructor.  For the third item, a service must register and unregister itself with OSGI.  Beyond that, this is all that services have in common; everything else is up to the specific service interface.
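A toy service satisfying these three obligations might look like the following.  The names are illustrative (this is not the real IDsfService interface), and the boolean flag merely stands in for registering with the OSGI registry:

```java
import java.util.Map;
import java.util.concurrent.Executor;

// Toy service showing the three obligations listed above: expose the
// executor, expose identifying properties, and provide startup/shutdown.
public class MiniService {
    private final Executor executor;             // received from the constructor
    private final Map<String, String> properties;
    private boolean registered;                  // stand-in for OSGI registration

    public MiniService(Executor executor, String sessionId) {
        this.executor = executor;
        // The session ID is part of the properties used to uniquely
        // identify this service instance in the OSGI registry.
        this.properties = Map.of("dsf.session", sessionId, "class", "MiniService");
    }

    // Obligation 1: clients must use this executor to call the service.
    public Executor getExecutor() { return executor; }

    // Obligation 2: the identifying properties.
    public Map<String, String> getProperties() { return properties; }

    // Obligation 3: startup and shutdown (would register/unregister with OSGI).
    public void initialize() { registered = true; }
    public void shutdown()   { registered = false; }
    public boolean isRegistered() { return registered; }
}
```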

Sessions (org.eclipse.dd.dsf.service.DsfSession)
-

DSF services are organized into logical groups, called sessions.  Sessions are necessary because we want multiple instances of systems built with DSF services to be able to run at the same time.  There is only a single OSGI service registry, so if multiple services are registered with a given class name, OSGI will not be able to distinguish between them based on the class name alone.  Therefore there is an additional property, IDsfService.PROP_SESSION_ID, which is used by every DSF service when registering with OSGI.

A Session object (TODO: link javadoc) has the following data associated with it:
-

-
  • Session ID - A String object that is unique among all other sessions.  The ID is used by services as the IDsfService.PROP_SESSION_ID property, and it is used by clients to obtain the Session object instance.
  • DsfExecutor - Each session has a single executor.  This means that all the services in a single session share the same executor and dispatch thread; conversely, it means that when operating in the dispatch thread, the state of all the services in a session will remain the same until the end of a dispatch.  Note: multiple sessions could share the same DsfExecutor.
  • Service startup counter - An integer counter which is read and incremented by every service that is started in a session.  This counter is used to determine the dependency order among services, which is used by events.
  • Event listener list - This is covered in the "Events" section.
  • Adapter list - A list of adapters, providing functionality analogous to the runtime's org.eclipse.core.internal.runtime.AdapterManager.  Sessions need to manage their own lists of adapters so that IAdaptable objects which originate from DSF services can provide different adapters based on the session that they originate from.  This feature is covered in detail in the "DSF Data Model" document.
-

The Session class also has a number of static features used to manage Session objects:

  • Session ID counter - Used to generate new session IDs.
  • Methods for starting and ending sessions.
  • Session started/ended event listener list - Allows clients to be notified when sessions are created or terminated, which is used mostly for clean-up purposes.
-

Startup/Shutdown

Managing the startup and shutdown process is often the most complicated aspect of a modular system.  The details of how the startup and shutdown processes should be performed are also highly dependent on the specifics of the system and service implementations.  To help with this, DSF provides two simple guidelines:

  1. There should be a clear dependency tree of all services within a session - When the dependencies between services are clearly defined, it is possible to bring up and bring down the services in an order that guarantees each running service can access all of the services that it depends on.
  2. There needs to be a single point of control which brings up and shuts down all the services - In other words, services should not initialize or shut down themselves based on some global event that they are all listening to.  Rather, an external piece of logic needs to be in charge of performing this operation.

The main implication of the first guideline is that each service can get and hold onto references to other services, without having to repeatedly check whether the service references are still valid.  This is because, by the time a given service is shut down, all services that depend on it will already have been shut down.  The second guideline simply ensures that the startup and shutdown procedures are clear and easy to follow.
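The single-point-of-control guideline can be pictured with a toy sequencer that starts services in dependency order and stops them in reverse, so a service never outlives anything it depends on.  This is illustrative only; real DSF startup is asynchronous:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// One object controls bring-up and bring-down for the whole session:
// start in dependency order, stop in exact reverse order.
public class ServiceStartup {
    public final List<String> log = new ArrayList<>();

    // Services listed so that each entry depends only on earlier entries.
    private final List<String> startupOrder;

    public ServiceStartup(List<String> startupOrder) {
        this.startupOrder = startupOrder;
    }

    public void sequence() {
        for (String s : startupOrder) log.add("start " + s);  // bring-up

        List<String> reverse = new ArrayList<>(startupOrder);
        Collections.reverse(reverse);
        for (String s : reverse) log.add("stop " + s);        // bring-down
    }
}
```

Because shutdown mirrors startup, a service like "stack" that depends on "runControl" is always stopped first, which is exactly why services can hold references to their dependencies without re-validating them.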

org.eclipse.dd.dsf.service.DsfServicesTracker vs. org.osgi.util.tracker.ServiceTracker

OSGI methods for obtaining and tracking services can be rather complicated.  To obtain a reference to a service, the client has to:

  1. Get a reference to a BundleContext object, which can be retrieved from the plugin class.
  2. Obtain a service reference object by calling BundleContext.getServiceReference().
  3. Obtain an instance of the service by calling BundleContext.getService(ServiceReference).

Worst of all, when the client is finished using the service, it has to call BundleContext.ungetService(ServiceReference), because the bundle context counts the outstanding references to a given service.  All this paperwork is useful for services which manage their own life cycle and could be unregistered at any time.  To make managing references to these kinds of services easier, OSGI provides a utility class called ServiceTracker.
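The use-count bookkeeping that makes ungetService() necessary can be mocked in a few lines.  This is a toy model of the idea, not the OSGI implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Toy mock of OSGI's per-service use counting: every getService() must
// eventually be balanced by an ungetService(), or the framework believes
// the service is still in use.
public class UseCountSketch {
    private final Map<String, Integer> counts = new HashMap<>();

    public Object getService(String serviceRef) {
        counts.merge(serviceRef, 1, Integer::sum);   // framework counts each get
        return new Object();                         // stand-in for the service
    }

    public void ungetService(String serviceRef) {
        counts.merge(serviceRef, -1, Integer::sum);  // each get must be balanced
    }

    public int useCount(String serviceRef) {
        return counts.getOrDefault(serviceRef, 0);
    }
}
```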

For DSF services, the life cycle of the services is much more predictable, but the process of obtaining a reference to a service is just as onerous.  DSF provides its own utility, separate from the ServiceTracker, named DsfServicesTracker.  The differences between the two are listed below:

  • Number of services tracked
      - OSGI ServiceTracker: While not strictly limited, it is optimized for tracking services of a single class type, or, more typically, a single service reference.
      - DSF DsfServicesTracker: Designed to track services within a single DSF session.
  • When service references are obtained
      - OSGI ServiceTracker: Obtains references automatically as the services register themselves.
      - DSF DsfServicesTracker: Service references are obtained as requested by the client, and cached.
  • Synchronization
      - OSGI ServiceTracker: Accessible from multiple threads.
      - DSF DsfServicesTracker: Can be accessed only on the session's dispatch thread.
  • Clean-up
      - OSGI ServiceTracker: Automatically un-gets references for services that are shut down.
      - DSF DsfServicesTracker: The client must listen to session events and clean up as needed.

Both trackers are useful.  Service implementations that depend on a number of other services are most likely to use the DSF DsfServicesTracker, while some clients which use a single service may find the OSGI ServiceTracker more suitable.
-

-

Events

Events are the most unconventional component of the services package, and the part most likely to need design modifications from the community.  The design goal of the event system is to allow a hierarchy of event classes, where a listener can register itself for a specific event class or for all events which derive from a base class.  The use case for this behavior is in the data model, where we would like the ability to capture all model-related events with a generic listener, while at the same time allowing services to make full use of class types.

The event model is made up of the following components:
-

-
  • DsfServiceEventHandler annotation - This is the only indicator that a given method is an event listener.  The class with the event handler doesn't have to implement any interfaces, but it must be public, which is a big drawback.
  • Session.addServiceEventListener and Session.removeServiceEventListener methods - These methods allow clients to register for an event based on an event class and a service filter, where the filter can be used to uniquely identify a service in the case of multiple service instances of the same class.
  • Session.dispatchEvent method - This is the method that actually dispatches the event to the listeners.  The method must be called by the service that generates the event.
There are only a few more notes about the events mechanism:

  1. The event is always dispatched in its own Runnable submitted to the session's DsfExecutor.
  2. It is a slight convenience for clients not to have to register for each type of event separately.
  3. It is a slight inconvenience for clients that anonymous classes cannot be used as listeners, due to the public-class requirement.
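The annotation-driven dispatch described above can be pictured with a stripped-down sketch: the dispatcher reflectively scans the listener's public methods for the annotation and invokes those whose parameter type matches (or is a supertype of) the event's class.  All names here are stand-ins for the real DSF classes:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Stripped-down picture of annotation-driven event dispatch.
public class EventSketch {
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface ServiceEventHandler {}

    public static class SuspendedEvent {}

    // Listener classes must be public so the dispatcher can invoke
    // their handler methods reflectively -- the drawback noted above.
    public static class Listener {
        public final List<String> seen = new ArrayList<>();

        @ServiceEventHandler
        public void handle(SuspendedEvent e) { seen.add("suspended"); }
    }

    // A matching handler takes one parameter whose type accepts the event;
    // this is what allows a handler for a base event class to receive all
    // derived events as well.
    public static void dispatchEvent(Object listener, Object event) {
        for (Method m : listener.getClass().getMethods()) {
            if (m.isAnnotationPresent(ServiceEventHandler.class)
                    && m.getParameterCount() == 1
                    && m.getParameterTypes()[0].isAssignableFrom(event.getClass())) {
                try {
                    m.invoke(listener, event);  // deliver the event
                } catch (ReflectiveOperationException ex) {
                    throw new IllegalStateException(ex);
                }
            }
        }
    }
}
```

The `isAssignableFrom` check is what implements the class-hierarchy goal: a handler declared for a base event class also receives every derived event.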

Debugger Services (org.eclipse.dd.dsf.debug)
-

The DSF framework includes a set of service interfaces for a typical debugger implementation.  Functionally, they are pretty much equivalent to the platform debug interfaces, but they are structured in a way that allows a debugger to implement only some of them.  In order for the startup and shutdown process to work effectively, the dependencies between services need to be clearly defined.  The dependencies between the main service interfaces are shown in the graph below:
[Figure omitted: the dependency graph between the main debugger service interfaces.]
It's also important to realize that a single hierarchy of interfaces is unlikely to adequately fit all the various debugger use cases, and it is likely that some interfaces will be needed which partially duplicate functionality found in other interfaces.  An example of this in the proposed interface set are the interfaces which are used to initiate a debugging session.  The INativeProcesses service is intended as a simple abstraction for native debuggers, where a debugger only needs an existing host process ID or an executable image name.  Based on this, an INativeProcesses debugger implementation should be able to initiate a debugging session, and return the run-control, memory, and symbol contexts that are required to carry out debugging operations.  By comparison, IOS and ITarget are generic interfaces which allow clients to manage multiple target definitions, to examine a wide array of OS objects, and to attach a debugger to a process or some other debuggable entity.
-

-

Disclaimer

Drafting large APIs that are intended to have many implementations and many clients is a notoriously difficult task.  It is unrealistic to expect that a first draft of such interfaces will not require changes; only time and multiple successful implementations can validate them.  While we can draw upon many examples of debugger APIs in Eclipse and in our commercial debugger, this is a new API, with a prototype that exercises only a small portion of its interfaces.
diff --git a/plugins/org.eclipse.dd.doc.dsf/toc.xml b/plugins/org.eclipse.dd.doc.dsf/toc.xml
index cdda79bb01c..0c117d28640 100644