Subject: review reports - 02C21
     Date: Mon, 12 May 2003 12:19:32 +0200
    From: roberto scopigno <roberto.scopigno@cnuce.cnr.it>
        To:  Jody Vilbrandt <jody@u-aizu.net>
       CC: David Duke <D.Duke@bath.ac.uk>
 
 

Dear Prof. Vilbrandt,
 

on November 14th, 2002 you submitted to Computer Graphics Forum your paper:

Paper code: 02C21
     Authors:    C. Vilbrandt, G. Pasko, A. Pasko, J. R. Goodwin,  J. M. Goodwin and T. L. Kunii,
           Title:    Cultural Heritage Preservation Using Constructive Shape Modeling
 

As I have now received the reports of four reviewers, I am in a position to make a decision regarding the paper.

As you will see from the attached reports, three reviewers solicit a major revision (with many constructive comments) and the fourth a minor revision.
 

I am thus unable to accept your paper for publication in its current form.
 

In case you would be interested in revising the paper in light of the reviewers' comments, I will be pleased to evaluate it again.

The revised paper should be accompanied by a detailed report setting out the changes brought to the paper to conform with the reviewers' recommendations.

This report, together with the revised paper, will be sent to the reviewers for a second review cycle.

Please do not hesitate to contact me if you have any questions.

Thank you for submitting to COMPUTER GRAPHICS FORUM.
 

Yours sincerely

Roberto Scopigno
 
 

Roberto Scopigno

Joint Chief Editor
Computer Graphics Forum

http://www.blackwellpublishers.co.uk/asp/journal.asp?ref=0167-7055
 

CNUCE - CNR
Area della Ricerca CNR di Pisa
Via G. Moruzzi 1, 56124 Pisa ITALY
phone: +39 050 315 2929     cell: +39 348 396 6819
fax:   +39 050 313 8091 (G3)  +39 050 313 8092 (G4)
email: roberto.scopigno@cnuce.cnr.it
web: http://vcg.iei.pi.cnr.it/~scopigno
 
 

   allreviews02C21.txt

                      Name: allreviews02C21.txt
                       Type: Plain Text (text/plain)
                 Encoding: quoted-printable

==================================================================
   Reviewer #1
==================================================================
Paper Number: 02C21
Author(s): C. Vilbrandt, G. Pasko, A. Pasko,
           J. R. Goodwin, J. M. Goodwin, T. L. Kunii
Paper Title: Cultural Heritage Preservation Using Constructive
             Shape Modeling
 

1. Classification of the paper

Choose one of:
    Practice-and-experience paper (variants, applications, case studies...)

2. Does the paper address computer graphics?

Choose one of:
   Yes

3. Originality/novelty
   Good

4. Importance
   High

5. Technical soundness
   Poor

6. Clarity of writing and presentation
   Average

7. Clarity and quality of illustrations (if present)
   Good

8.  Does the ACM classification provided correspond to the paper topic?
   Yes

I.3.5   Computational Geometry and Object Modeling
        Boundary representations
        Constructive solid geometry (CSG)**
        Curve, surface, solid, and object representations
        Modeling packages
I.3.6   Methodology and Techniques
        Graphics data structures and data types
        Languages
        Standards
I.3.8   Applications
 

9.  Should the paper be shortened?
   Yes (see below), 15 pages -> 8 pages

10. Overall judgement
   Good/Average

11. Recommendation
   Accept, but major revisions required
 

Information for the Authors
------------------------------------------------------------------

The paper attempts to recommend and establish the author's F-Reps/HyperFun method as a representation method for Cultural Heritage content. The (IMHO valid!)  point made by the authors is that a high-level representation for CH reconstructions, replacing raw triangles, would be a gorgeous thing to have.

We agree it would be a gorgeous thing to have; that is why we are working on it :-), but only if:
* such a set of digital tools were designed to meet the long-term archival needs of the CH community;
* they were made easily available under the "ownership" of the CH community. What if they existed, but were limited to a few because of cost?

However, they will most likely not come into existence if the CH community continues, for lack of better tools, to accept and support the use of short-term proprietary "software products" created by "manufacturers" that are not capable of meeting the long-term archival needs of the CH community or the rigors of independent verification.
 

The paper suffers though from

1. being unfocused (eg. religious arguments for open source, redundancy);

We agree with your viewpoint, and we will attempt to focus on and make our most important point concerning "digital data persistence" and the digital archiving of CH objects.

Our point about open source is in fact the most important paradigm shift proposed by the paper, calling for the widest and broadest changes for the benefit of the CH community.

The problems concerning the digital archiving of CH are not easily understood, and at present the basic problems with digital persistence are denied by most. The definition of the above problems and the proposed solutions come from 20 years of personal experience, resulting in a deep understanding of both digital materials and processes and their complex implementation.

We fully understand that our point about open source as a central part of the solution to the digital data persistence problem could be seen as a religious argument. It is difficult to articulate the years of personal experience needed to understand the very complex nature of digital devices and processes, which is necessary to fully understand the central issues surrounding the use of "open source" and its connection to DACH. Thus we must resort to the rigor of detailed descriptions of the digital persistence problem, stemming from our personal experience, before our point can be seen as more than a religious argument.

How is the use of computers changing us?
We use a TV and we know what a consumer culture is. We now use computers, but do we know what digital culture is? If you seek to answer basic fundamental questions about digital existence, such as "what is digital data", "how long will it last", and "why", then the paradigm shift concerning the problems and the solutions we propose will begin to emerge. Exact answers to these questions are at once simple, complex, and difficult to communicate; the issues overlap and reinforce each other.

The short answer - addressing the "open source" core issue directly
The independent and rigorous verification of processes and data is a basic premise of academic and scientific research and a necessity before any attempt should be made to archive important historical objects. "Open source" or "free source" provides anyone the ability to change and compile the source and to test the compiled binary. This is the only way that the basic requirement of "rigorous proof" can even come close to being satisfied.

The long answer - "open source" is the central underlying issue in digitally archiving and preserving CH
The Library of Congress is said to be able to quickly and safely store and retrieve digital data for much longer periods of time than paper data, but the question is: will it have any value, or be usable at all, in a thousand years? The machines and processes on which most current digital data depends will be long gone, and a migration path for most proprietary data made with proprietary processes is clearly not available. Translation from one set of digital processes to another is much like translation between spoken and written languages: the translation/migration of digital data from one set to another has no one-to-one mapping or relationship. We have safely stored 20 years of digital data, and only the last seven years of it has any value. The problem of "digital data persistence" is not being addressed.
 

The digital data persistence problem concerning the archiving of CH objects
1.0 We claim that, because of the almost infinite exponential evolutionary advancement of digital technologies, combined with the "virtual", diminutive, and ubiquitous nature of current digital materials and processes, and with laws such as the DMCA, currently so-called proprietary "software products" cannot be used to digitally archive CH objects. Sections 1.1 to 1.7 below attempt to define the current problems as encountered and observed over the past twenty years. Sections 2.1 to 2.3 below propose solutions.

You are a "user" of a leased "digital process"
1.1 It is a general misconception that you purchase and own a so-called "software product", when in fact you lease the use of a very complex group of "virtual" digital processes for modifying digital data. Furthermore, you lease these digital processes for twenty years, which is a very short time relative to the length of time required for long-term digital archiving. We correctly and precisely report that the common pervasive paradigm promoted by so-called "software manufacturers", offering leased digital processes as a "software product", is highly misleading and leads to the erroneous belief that you have and can maintain control of a "software product". Nothing could be farther from the truth. In fact, the license agreements for use of the digital processes can be revoked at the discretion of the provider for any number of reasons.

Specific hardware is now critical to the access of digital data
1.2 It is a fact that many of the parts of a computer, including CPUs, have now been given serial numbers, and the validity of the lease and the use of the digital processes is now based on such serial numbers. If you have a hardware failure and need to replace items such as hard drives or CPUs, the digital processes of which you have only binary copies will no longer be usable. In the case of hardware failure, the providers of the digital processes you have leased are under no obligation to replace them, and even if they do agree to a replacement, they will not be able to supply exact copies of the last digital processes, because most providers' processes are changed daily. It is also a fact that providers can fold and disappear overnight.

Infinite evolutionary exponential change in digital technologies limits digital data's useful lifetime
1.3 It is a fact that the large set of complex digital processes that you lease for 10 to 20 years will become unusable within three to seven years, because of the hardware failures described above or because of the infinite evolutionary exponential change in the digital technologies on which current digital processes, and consequently digital data, depend. It is also a fact that, because computer systems are so complex, any change in hardware or update to the operating system or other digital processes could lead to part or all of your digital data becoming inaccessible, or, unless proper backup procedures are rigorously used, could damage digital data beyond repair.

It is impossible to independently and rigorously verify digital processes under current lease agreements and recent acts passed by the US Congress
1.4 It is a fact that, without access to the source code and the ability to change and compile the source and test the compiled binary, you cannot independently and rigorously verify the digital processes that you have leased. Furthermore, the DMCA makes it illegal to engage in any activity that would independently verify the operation of the leased digital processes. The independent and rigorous verification of digital data is the basic foundation of academic and scientific research and a necessity for the archiving of digital data.

The complexity of the processes makes reliability and accuracy difficult to determine
1.5 It is a fact that digital processes are extremely complex, and most providers no longer create the core source code. In fact, most providers do not have access to the core source code themselves and have no way to provide their clients with any rigorous proof of operation, much less of accuracy. It is therefore not surprising that providers no longer specify or guarantee the accuracy of any of the digital processes they lease to their clients.

Intermediate results and a history must be available
1.6 This may be redundant, but in proprietary digital processes, because of the need to secure the processes used, intermediate results and a history of operations and/or processes are not made available, lest someone reverse engineer and recode the application. In "free source" this is not such a problem, but all too often only the final result of a series of complex operations is retained, the processes cannot be reversed, and the original conditions are lost. Therefore the results do not meet the basic requirement of "rigorous proof" needed in CHP work. Short-term commercial views and profit goals exclude any consideration of meeting the test of "rigorous proof".
 
 

The actual lifetime of current "proprietary" digital data and processes is short and cannot be accurately determined
1.7 It is a fact that no provider will guarantee the length of time that the digital data created by the digital processes they supply will remain accessible. No database application provider will state in writing how long the database you create will remain accessible. Digital data in proprietary formats, created by closed proprietary digital processes, is not persistent, because current digital data is interdependent with very complex proprietary digital technologies that will continue to advance exponentially; with each advance the proprietary digital processes must be continually updated, and each update may or may not be compatible with the previous processes or data. Therefore, most digital technologies and the data created by them are not archive-worthy, but rather short-term tools which will last only as long as the companies which created them. In the case of cultural heritage, the data most likely will not last as long as the artifacts it is seeking to conserve.
 
 
 
 

Solutions
2.0 We propose that the solution is both technical and human. Solving the current digital persistence problem and the digital preservation of CH objects is an overwhelming task. However, digital technologies freed from the dictates of short-term commercial market aims offer a method of storing complex and multidimensional data (space, time, material properties) to conserve precious historical objects long beyond their expected lifetimes. The development of F-rep HyperFun modeling is the technical key to the problem, and F-rep developed under the digital freedom, human rights, and sustainable future provisions of the GGPL agreement, executed under conditions of global openness that include transparency and accountability in all transactions, is the social key to the problem. We have created an organization called "HyperFun NPO" in Japan to implement both the technical and social solutions needed.

2.1 The solution to the problems of ownership, open inspection of the process by all, and cost of ownership is "free or open source" development. Free or open source development of digital archiving technologies is one of the more critical issues to get right. "Free source" gives the "user" access to the source code and the ability to change and compile the source and test the compiled binary. Therefore "free source" development is basic to the practical implementation of F-rep. This is the only way that the basic requirement of "rigorous proof" of all operations and processes can be satisfied.

2.2 The solution to hardware and software data dependence is the creation of robust digital data structures that are independent of hardware and software processes. The method we propose, and used to create our models, relies only on precise definitions of mathematical models together with a history of procedures and operations retained in and embedded in the stored data. We are at a very early stage of the development of digital data and true digital data structures. We should mention that the "free source" development model, based on the development and sharing of resources, has no need to specifically and artificially limit an application to a given set of hardware or software.

2.3 The creation of robust, independent digital data structures, by abstracting the mathematical models from specific hardware or software and embedding the history, processes, and operations into the data structure, avoids all of the current dependency problems. The computational cost of creating the robust digital data structures necessary for the preservation of CH is extremely high. However, this is not a problem, given the projected almost infinite exponential evolutionary growth in computation. It is predicted that within twenty years a PC of today's equivalent cost will have the same storage and computational abilities as a single human brain, only a thousand times faster. Please note that we did not fully commit to this work and create the first F-rep based language, HyperFun, until we had witnessed and assured ourselves over a period of six years that the predicted exponential computational growth was actually occurring. Please also note that the growth of digital storage was approximately 10 times faster during this period than the computational growth. F-rep based modeling is the next killer application for massively parallel processing, including the predicted quantum computational abilities of the future.
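The F-rep scheme referred to throughout these responses can be made concrete with a small sketch. This is our own illustration in Python, not HyperFun itself: a solid is a real-valued function f(x, y, z) that is non-negative inside the object, and set-theoretic operations are built from min/max, the simplest R-functions.

```python
# Minimal F-rep sketch (illustration only, not the HyperFun implementation).
# Convention: f(x, y, z) >= 0 inside the solid, < 0 outside.

def sphere(cx, cy, cz, r):
    # sphere of radius r centered at (cx, cy, cz)
    return lambda x, y, z: r**2 - ((x-cx)**2 + (y-cy)**2 + (z-cz)**2)

def union(f, g):
    return lambda x, y, z: max(f(x, y, z), g(x, y, z))

def intersection(f, g):
    return lambda x, y, z: min(f(x, y, z), g(x, y, z))

def subtract(f, g):
    return lambda x, y, z: min(f(x, y, z), -g(x, y, z))

# a hollow shell: big sphere minus small sphere
shell = subtract(sphere(0, 0, 0, 2.0), sphere(0, 0, 0, 1.0))
print(shell(1.5, 0, 0) >= 0)   # True  - inside the shell wall
print(shell(0.5, 0, 0) >= 0)   # False - inside the cavity
```

Because the model is a plain function over real coordinates, it carries no dependence on any particular hardware, file format, or mesh resolution, which is the persistence argument made above.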
 

2. a lack of precise technical content in the presentation.

Issue 2. is debatable: I personally think it would be of greatest interest for the community to see precisely how the great examples
have been assembled, but the paper remains an overview paper all too often. The authors talk about F-Reps and HyperFun all the
time, and of tiny file sizes, but there is not the slightest code example. Operations like 'virtual lumber cut' and 'bounded blending' are highly interesting, but only presented on a shallow level. I think there is much substance in the paper, but the presentation must be much clearer. And more modest, if I may say so. This is why I recommend a major revision.

Hmm, agreed... I think that we are just too close to the work.... All of the operations are rather simple and now boring to us.... However, showing exactly how some of the construction took place would be great.
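For instance, the 'virtual lumber cut' the reviewer asks about could be illustrated along these lines. This is a hypothetical sketch in Python rather than HyperFun, using the same f >= 0 inside convention; the primitives and dimensions are our own invention for illustration, not the parameters actually used in the model.

```python
# Hypothetical "virtual lumber cut" sketch: intersect a log (a cylinder
# along the z axis) with planar half-spaces to cut out a square beam.

def cylinder_z(r):
    # infinite cylinder of radius r around the z axis
    return lambda x, y, z: r**2 - (x**2 + y**2)

def halfspace(nx, ny, nz, d):
    # points with nx*x + ny*y + nz*z <= d are inside
    return lambda x, y, z: d - (nx*x + ny*y + nz*z)

def intersection(*fs):
    return lambda x, y, z: min(f(x, y, z) for f in fs)

# cut a 1x1 beam from a log of radius 1
beam = intersection(cylinder_z(1.0),
                    halfspace( 1, 0, 0, 0.5), halfspace(-1, 0, 0, 0.5),
                    halfspace( 0, 1, 0, 0.5), halfspace( 0, -1, 0, 0.5))
```

Each cut is just one more half-space in the intersection, so the complete cutting history stays visible in the model definition itself.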
 

More concrete points, following the paper:

SECTION 1. Introduction

1. "digital capture", "parametrically augment" are speculative arguments because you don't present a method to automatically create a functional rep from a scanned dataset. This is not a scanning or reconstruction paper, and it's quite difficult to replace scanned/measured data by higher-level reps.  Have a look at Hoffmann/Arinyo for info on parametric design:
        @incollection{Hoffmann2002,
          author = {C. M. Hoffmann and Joan Arinyo},
          title = {Parametric Modeling},
          year = {2002},
          booktitle = {Handbook of CAGD},
          editor = {G. Farin and J. Hoschek and M.-S. Kim},
          publisher = {Elsevier},
          pages = {519-542}
        }
No, this paper does not use a scan-and-mesh approach. However, we can present a method for the automatic creation of a functional rep from a scanned dataset / a cloud of points.
 

2. Which two methods do you mean? You have two examples, but two methods?
We manually created both models by "manual input", but one model used a CSG rep and the other used F-rep.
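The difference between the two inputs can be sketched as follows (our Python illustration, not the actual modeling code): the CSG route stores an explicit tree of primitives and boolean operations, while the F-rep route defines the same solid as a single real-valued function with f >= 0 inside.

```python
# CSG rep: an explicit tree of primitives and boolean operations
csg_model = ("difference",
             ("sphere", (0, 0, 0), 2.0),
             ("sphere", (0, 0, 0), 1.0))

def csg_inside(node, p):
    # evaluate point membership by walking the tree
    op = node[0]
    if op == "sphere":
        (cx, cy, cz), r = node[1], node[2]
        x, y, z = p
        return (x-cx)**2 + (y-cy)**2 + (z-cz)**2 <= r**2
    a, b = csg_inside(node[1], p), csg_inside(node[2], p)
    return {"union": a or b,
            "intersection": a and b,
            "difference": a and not b}[op]

# F-rep: the same solid (a hollow shell) as one function, f >= 0 inside
def frep_shell(x, y, z):
    f_big   = 2.0**2 - (x*x + y*y + z*z)
    f_small = 1.0**2 - (x*x + y*y + z*z)
    return min(f_big, -f_small)

p = (1.5, 0.0, 0.0)
print(csg_inside(csg_model, p) == (frep_shell(*p) >= 0))  # True
```

Both describe the same shape away from the surface; the F-rep, however, remains a closed function under further operations, which is the property the paper argues for.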

SECTION Measurements and modeling

3. 'Parameterization of BRep models is quite limited. Only simple
    time-dependent parameterization of BRep is allowed, which does not change
    the object topology.' - are you sure? Do you have a reference for that?
    There are morphing papers where mesh topology is changed all the time, and
    breps are also flexibly used in parametric modeling

4.  You propose to replace meshes by functionally combined
primitives???
    Later you say that primitives are too primitive.
 

SECTION Scanning

5.  Computed tomography scans produce real volume data and are used in
    archeology.

6.  Do you really propose to archive only 'raw data'?? But for a
bigger model it might be tedious to re-do all processing always
starting from raw data, and other scenarios are also possible.
No, we do not propose the archiving of only "raw data", but we do propose that the raw data must be a part of what is archived.
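What we do propose could be sketched as an archive record that bundles the raw data with the constructive model and its processing history. All field names and file names below are hypothetical, chosen for illustration only, not a proposed standard.

```python
# Illustrative archive record: raw data is kept alongside the
# constructive model and a replayable processing history, rather
# than archiving raw data alone. (Names are hypothetical.)

archive_record = {
    "artifact": "historical structure (reconstruction)",
    "raw_data": ["survey_points.txt", "site_photos/"],  # kept verbatim
    "model":    "structure.hf",       # constructive (HyperFun) source
    "history":  [                     # ordered, replayable steps
        {"step": "measure", "note": "field survey measurements"},
        {"step": "model",   "note": "manual constructive input"},
        {"step": "verify",  "note": "model checked against measurements"},
    ],
}

# rigorous verification requires the raw data and the history together
assert archive_record["raw_data"] and archive_record["history"]
```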
 

SECTION Scanning and meshing

7.  Isn't this section a bit redundant?
 

SECTION Scanning and modeling

8.  What is this section (scan registration through voxels) good for in
    your argumentation? Reference 5 is a bit old; there is much newer
    work (radial basis functions...).
 

SECTION 2.2 Problems of cultural heritage preservation

9.  Whole section much too general, unspecific criticism. What is it
    exactly that you complain about?
10. First sentence: WHICH first approach?
11. First two paragraphs: This is religious; you seem to confuse 'open
    source' with 'standards'. There's much 'open source' software which
    no one can really understand, and there's software from commercial
    companies which properly implements properly defined algorithms,
    methods, or standards. Your arguments actually go for well-defined
    standards, or specifications, that's all. And VRML, which you use so
    much, is not at all properly defined (in terms of precision etc.);
    it is valid if the Cosmo player thinks it is.
12. "Computer models of cultural heritage sites and artifacts are
    SOMETIMES made...."
13. Why do you mention VRML? Why don't you mention XML?
14. "Inaccurate ..." - who decides what is appropriate for archiving??
    Would you use float, double, or exact arithmetic? - Religious.
15. "Violation..." - skip this. The difference is a matrix
    multiplication.
16. "Data is..." - skip this. Accuracy is a delicate, complex subject
    and deserves more attention than a naive 'not accurate enough'.
    'Every level of detail' is a void statement, and 'easily remedied'
    is a joke?
17. "Digital migration problems..." This section is religious and void.
    Linux doesn't automatically make for better software. ANSI C is as
    much a standard as Java is.
18. "On the other hand..." - that's a valid point: to stress that
    computer science should deliver better tools to archeologists.
 

SECTION 3.1 Boundary representation and Constructive Solid
Geometry

19. Face plane equations: you should start by saying that your breps
    have planar faces. Figure 1: not very accurate. Which incidences do
    you store?
20. To satisfy Euler's formula is not sufficient for topological
    validity.
21. What is 'wire frame'??? - Whole paragraph is dull. - "BReps not
    archival quality" is simply false. Have a look at the
    Polyhedron_3<Traits> class from www.cgal.org.
22. What is the torus primitive good for in CH? You want to replace
    meshes by CSG trees? - The CSG tree is common knowledge; replace the
    explanation by a reference.
23. "CSG modeling can be called bi-directional." What do you mean??
24. "Thus, though it performs well in its representation of most
    architecture, it would not do for sculpture." Have a look at an
    Ionic column: CSG is not enough. What do you mention IGES and STEP
    for, just to complain they don't represent CSG trees well?
 

SECTION 3.2 Function representation and the HyperFun modeling
language

25. You demand well-definedness from others, but with HyperFun you admit
    that only 'many ops are closed on the rep'? And what about those
    that aren't?
26. "The average size of HyperFun files is 5K." - void statement.
27. "It is quite easy to learn and use HyperFun on the beginner's
    level." - that's nice, but probably the wrong kind of argument for a
    CH paper.
28. "The open and simple..." - well, do you propose HyperFun/F-Rep as a
    standard, or don't you? The vital question then: does it make an
    archeologist's life easier? And if it does, in which ways exactly?
 

SECTION 4.1 Constructive modeling approach

29. 'The system could have ...' - all this is speculative; put it in
    "Future Work". Many applications do not need a volumetric rep.
30. Paper has served well for centuries. - religious.
 

SECTION 4.2 Constructive modeling of historical buildings

31. Quite problematic section. Saying that "CSG = no loss of data" is
    just wrong.
32. WHAT CSG system is it that fails so badly??? Do you have to report
    on such bad results? It seems that CSG is somewhat of a wrong way to
    go here. Why are your models so large? - There are MANY ways to
    implement CSG, some of which are quite lean (look for 'boolean set'
    on www.siggraph.org/publications/bibliography).
33. "the benefits..." - too general. - we know all that.
 

SECTION Golden Hall at Enichiji

34. "The first structure that we modeled using the CSG system"... WHAT
    CSG system?
35. "measured five bays..." - what is a bay? Figures 6-11 are cool, but
    they reveal too little detail because they are much too small. Color
    Section.
36. 'Virtual lumber cut' sounds cool. What is it? How is it defined?
    Would you give examples (images + code)? This could be one of the
    strongest points of the paper.
37. Produce QuickTime and AVI.... that you can do that is CLEAR!
 

SECTION Virtual Shikki

38. "The basic mathematical representation of 3D models should
    allow..." - what are you saying? Does or does not HyperFun/F-Rep
    allow this?
 

SECTION Implementation Issues

39. 'included ... the measurement of the coordinates of control points'
    - now, what do you propose: to scan or not to scan?
40. "using the HyperFun language (see above)," - I can see no use of the
    HyperFun language above. How about some tiny code examples?
41. Bounded blending - great. But how is the blend defined, from the
    many possibilities known from CAGD? And would your blending op match
    the way shikki blends are made? What is the relevance/exactness??
42. "The implementation of the three first stages..." basically repeats
    the steps from "The Virtual Shikki project includes the following
    research and development activities:" one page earlier.
44. Your steps 1.-6. are NOT state of the art! - You must skip all
    processing steps which involve VRML. Why do you need a separate
    decimator; why is decimation not part of the polygonizer? You're
    advocating open source, but you don't seem to know the 'gnu
    triangulated surface library' (gts.sourceforge.net), which has a
    fast decimator for you. Why is texture mapping not part of your
    language, as it is basically a functional process? Why are you so
    speculative about 'HyperFun as a network protocol'? This is a MUST
    if you want people to take it seriously as a representation better
    than meshes. But you need to get rid of the postprocessing for that.
    - If you don't, this is an indication that HyperFun/F-Rep is not yet
    mature enough for an interdisciplinary task like CH.
 

SECTION 5. Conclusion

45. What good is HyperFun/FRep if I also need what you call a 'polygonal
    mesh or BRep' in parallel for certain tasks? And I thought that a
    BRep is the result of CSG: CSG tree -> evaluation -> BRep mesh;
    for scanning: point cloud -> registration/reconstruction -> triangle
    mesh. Your argumentation would be much stronger if you were saying:
    when you recreate, use HyperFun/FRep; when you scan, use trimeshes;
    and our future work is to create tools that assist in going from
    trimeshes to HyperFun.
46. "Perhaps the most important advantage of the FRep geometric protocol
    is its open and simple textual format" - I don't agree. The greatest
    advantage is that a functional representation can capture object
    semantics instead of raw geometry, and that many different instances
    of parameterized shapes can be easily created. It remains to prove
    the efficiency of your representation for CH: that slightly
    different architectural constructions also lead to only slightly
    different HyperFun representations (or even just different
    parameters). This proof is still missing from your paper.
 

==================================================================
   Reviewer #2
==================================================================
Paper Number: 02C21
Author(s):    C. Vilbrandt, G. Pasko, A. Pasko, J. R. Goodwin,
              J. M. Goodwin and T. L. Kunii,
Paper Title:  Cultural Heritage Preservation Using Constructive
              Shape Modeling
 

1. Classification of the paper
     Practice-and-experience paper (variants, applications, case studies...)

2. Does the paper address computer graphics?
    Yes

3. Originality/novelty
    High

4. Importance
    Good

5. Technical soundness
    High

6. Clarity of writing and presentation
    Average

7. Clarity and quality of illustrations (if present)
    Average

8.  Does the ACM classification provided correspond to the paper topic?

    NO classification given

9.  Should the paper be shortened?
    Yes

10. Overall judgement
    Average
 

11. Recommendation
    Accept, but major revisions required
 

Information for the Authors
---------------------------

Paragraphs 1 and 2 are too vague and often not well defined; they should
therefore be summarized and better focused.
 

Likewise, paragraph 4 (section 4.1) takes up issues previously described,
which for this reason should not be presented once more as an autonomous
paragraph.
 

Sections 4.2 and 4.3 are too long and could be summarized.
 

The paper presents a number of excellent ideas concerning the application of
F-rep operators within historical architecture and cultural heritage.
Nevertheless, the results are contradictory (a conclusion also pointed out
by the very same authors in section 4.2); the modeling techniques are often
inappropriate; sometimes the real advantage over traditional commercial
tools (with which it is possible to obtain results even with very low-cost
software) is not clear; and the models and visualizations are often
unacceptable for the proposed application field.
 

The idea of modeling architecture using CSG is not applicable, because
building construction usually presents complex and non-homogeneous shapes
made of simple and often discrete pieces, rather than an aggregation of
elements obtained by applying boolean operators to simple solids (indeed,
the authors could not produce any models with this technique).
 

Furthermore, an ancient building is more typically a ruined object, made of
both free-form surfaces and a number of considerable solid primitives
(cylinders, parallelepipeds, spheres, etc.).
 

A revision of the paper would therefore be necessary, which should:

- distinguish between obtainable and obtained results;

- limit the case record in the application field to cases of historical
reconstruction rather than representation of real-world buildings;

- give explanations of the applicability of NURBS or Bézier operators for
such cases (actually already implemented in the HyperFun libraries);

- investigate the case of application to thematic analysis (static,
thermo-technical, etc.)
 

==================================================================
   Reviewer #3
==================================================================
 Paper Number: 02C21
 Author(s): C. Vilbrandt, G. Pasko, A. Pasko, J. R. Goodwin, J. M. Goodwin
           and T. L. Kunii
 Paper Title: Cultural Heritage Preservation Using Constructive Shape Modeling
 

 1. Classification of the paper

      Practice-and-experience paper (variants, applications, case studies...)

 2. Does the paper address computer graphics?
     Yes

 3. Originality/novelty

     Average

 4. Importance

     Poor

 5. Technical soundness

     Average

 6. Clarity of writing and presentation

     Good

 7. Clarity and quality of illustrations (if present)

     Poor

 8.  Does the ACM classification provided correspond to the paper topic?
      (all papers should be classified using the ACM Computing
       Classification System - ACM CCS, found at http://www.acm.org/class )

     No

 9.  Should the paper be shortened?

     Yes

 10. Overall judgement

     Average

 11. Recommendation

     Major revisions and re-refereeing required
 
 

 Information for the Authors
 ---------------------------

This is a potentially interesting paper, but it includes little detail on
the actual approach used, and there is no adequate justification for the
use of this approach.

One of the hardest parts of CSG is knowing how to combine the primitives
to achieve the desired shape. You should provide far more detail on how
you achieved the shapes shown in your figures.

The main criticism of this paper is the lack of justification for the
use of CSG. In fact in section 4.2 you even mention that you had to
revert to a polygonal mesh to represent part of your model.

The benefits you list for your approach, that is, rendering some parts of
the scene and not others, apply equally to any modelling approach that
separates the scene into appropriate objects. One of the problems with CSG
has always been high-quality rendering of the resultant models, which is
essential for high-quality walkthroughs and the like. Was this a problem
for your renderings in figures 11d and 11e?

A key point you emphasize is that the average size of the HyperFun models
was much smaller than that of the VRML models. You must supply some
measure of the difference in quality between the resultant images: there
is no point in the HyperFun models being smaller if the quality is
severely compromised. You may want to compare images of the resultant
models using a perceptual difference approach such as Myszkowski's VDP:

MYSZKOWSKI, K. 1998. The Visible Differences Predictor: Applications
to global illumination problems. In Proceedings of the 1998
Eurographics Workshop on Rendering Techniques, G. Drettakis and
N. Max, Eds. 223-236.
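Even before a full perceptual comparison, a crude quantitative baseline such
as a per-pixel RMS difference between corresponding renderings would help.
The sketch below is illustrative only: the pixel data is synthetic, and RMS
error is no substitute for a perceptual metric like the VDP cited above.

```python
# Illustrative per-pixel RMS difference between two grayscale images
# stored as lists of rows. A crude baseline for comparing renderings;
# NOT a perceptual metric such as the VDP. All data here is synthetic.
import math

def rmse(img_a, img_b):
    diffs = [(a - b) ** 2
             for row_a, row_b in zip(img_a, img_b)
             for a, b in zip(row_a, row_b)]
    return math.sqrt(sum(diffs) / len(diffs))

render_hyperfun = [[10, 20], [30, 40]]   # synthetic pixel values
render_vrml     = [[12, 20], [30, 44]]

print(round(rmse(render_hyperfun, render_vrml), 3))
```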
 

Minor point: I have always seen the CSG symbol for subtraction written as
"-", not "/". Your symbols for union and intersection did not come out in
my printing of the paper; were you using a non-standard font?

==================================================================
   Reviewer #4
==================================================================
 Paper Number: 02C21
 Author(s): C. Vilbrandt, G. Pasko, A. Pasko, J. R. Goodwin, J. M. Goodwin
           and T. L. Kunii
 Paper Title: Cultural Heritage Preservation Using Constructive Shape Modeling
 

1. Classification of the paper

Choose one of:
     Research paper (presents innovative research results)
     Technical note (short paper, focuses on a single technical issue)
x    Practice-and-experience paper (variants, applications, case studies...)
     State-of-the-art report (reviews recent advances)
     Other (please specify)

2. Does the paper address computer graphics?

Choose one of:
x   Yes
    No
    Marginal

3. Originality/novelty

Choose one of:
    High
x   Good
    Average
    Poor
    Low

4. Importance

Choose one of:
    High
x   Good
    Average
    Poor
    Low

5. Technical soundness

Choose one of:
    High
    Good
x   Average
    Poor
    Low

6. Clarity of writing and presentation

Choose one of:
    High
x   Good
    Average
    Poor
    Low

7. Clarity and quality of illustrations (if present)

Choose one of:
    Not applicable
    High
x   Good
    Average
    Poor
    Low

8.  Does the ACM classification provided correspond to the paper topic?
     (all papers should be classified using the ACM Computing
      Classification System - ACM CCS, found at http://www.acm.org/class )

Choose one of:
    Yes
x   No (none given)

If NO (or if the author has not indicated it), please specify
alternative ACM classification:

  I.3.5 Curve, surface, solid, and object representations
 

9.  Should the paper be shortened?

If you recommend shortening the paper, please indicate in the
`Information for Authors' section where the paper should be
shortened, or mark up and return the paper.

Choose one of:
x   Yes
    No
 

10. Overall judgement

Choose one of:
    High
x   Good
    Average
    Poor
    Low
 

11. Recommendation

Choose one of:
    Accept
x   Accept after minor revision
    Accept, but major revisions required
    Major revisions and re-refereeing required
    Reject
 

Information for the Authors
---------------------------

Although existing techniques (BRep, CSG, and the authors' FRep) are used
in the proposed approach, the idea of enforcing open data formats for
long-term archiving purposes is an interesting and (at least in this
context) new one.

The authors dedicate a single paragraph to the fact that modeling with
FRep is extremely time-consuming. However, for a useful archiving tool,
not only long-term storage matters but also the time needed to create the
archive from source data. The paper lacks a report of actual experience
with users of FRep (training time, modeling time). For similar reasons, I
also do not agree with the statement that "HyperFun provides a high level
of compression", since a compression technique is of limited usefulness
if it is not fully automatic.

The discussion of BRep and CSG in Section 3 is common knowledge in
computer graphics and is too detailed.