Subject: Re: Answers to reviews 2
          ----------------------------------------------

  REVIEWER No.1
----------------------------------------------
 Paper Number: 02C21
 Author(s):    C. Vilbrandt, G. Pasko, A. Pasko, P.-A. Fayolle,
              T. Vilbrandt,  J. R. Goodwin, J. M. Goodwin, T. L. Kunii
 Paper Title:  Cultural Heritage Preservation Using Constructive
               Shape Modeling
 
1. Classification of the paper
Choose one of:
    Practice-and-experience paper (variants, applications, case studies...)
2. Does the paper address computer graphics?
Choose one of:
   Yes
3. Originality/novelty
Choose one of:
   Good
4. Importance
Choose one of:
   High
5. Technical soundness
Choose one of:
   Good
6. Clarity of writing and presentation
Choose one of:
   High
7. Clarity and quality of illustrations (if present)
Choose one of:
   Good  (some of them (Fig. 10) could be bigger but space is limited)
8.  Does the ACM classification provided correspond to the paper topic?
   Yes
I.3.5   Computational Geometry and Object Modeling
        Boundary representations
        Constructive solid geometry (CSG)**
        Curve, surface, solid, and object representations
        Modeling packages
I.3.6   Methodology and Techniques
        Graphics data structures and data types
        Languages
        Standards
I.3.8   Applications
 
9.  Should the paper be shortened?
Choose one of:
   Yes (introductory part)
10. Overall judgment
Choose one of:
   High
 
11. Recommendation
Choose one of:
   Accept
 
Information for the Authors
---------------------------
 
The paper has been diligently rewritten; the new version is indeed
a major revision. Most of my prior remarks are now obsolete.
The problem that remains might be the length of the paper. It
tries to cover a very broad thematic range, although one must say
that it's well balanced:
 - General Applicability of Digital Preservation
   in the domain of Cultural Heritage, different methods and their problems
 - The HyperFun/FRep approach
 - Case study with three examples
 - Fitting of scanned data to a functional representation
The first introductory sections could probably be shortened. The
paper should be proof-read by a native speaker (e.g., "On the other
hand, STEP protocol supports CSG" might better read "On the other hand, the STEP protocol ... does support CSG").
Considering my (limited) experience with real archaeologists, I
have my doubts if they will be as enthusiastic about the validity
and the usability of the HyperFun/FRep approach as the authors
are. (Which doesn't mean there should be no such research, of
course)
A few concrete remarks.
1. Introduction: "Unfortunately, ... will become unusable " - are you sure?
    Proposal: "... will probably become unusable..."
Original:
Unfortunately, much of the current digital modeling, visualization and animation of cultural heritage objects will become unusable before the existing heritage objects themselves are destroyed or lost. Thus, we discuss the technical problems concerning digital persistence as related to the creation of system independent digital data structures. We also argue the benefits of using open standards and procedures whenever possible as critical to the archiving of data.
 
Added the following sentence and supporting reference. Also attempted to shorten the sentence as requested above by RV1.
"Rapid changes in the means of recording information, in the formats for storage, and in the technologies for use threaten to render the life of information in the digital age as, ..... , 'nasty, brutish and short'." [2]  Much of the current digital modeling, visualization and animation of cultural heritage objects will become unusable before the existing heritage objects themselves are destroyed or lost. Thus, we discuss the technical problems concerning digital persistence as related to the creation of system independent digital data structures. We also argue the use of open standards and procedures as critical to the archiving of data.
[2] added to Introduction and Section 2.4  http://www.rlg.org/ArchTF/
"Preserving Digital Information, Final Report and Recommendations of the Task Force on Archiving of Digital Information", Commission on Preservation and Access and The Research Libraries Group, Inc. , May 1996.

Original:
More than ten years ago, concerned with digital persistence and lacking any robust tools to choose from which met open standards, the authors made careful selection of Constructive Solid Geometry (CSG) as a model for culturally valuable shapes. The authors, aware of the pending probable obsolescence of the supporting tools and data structures, planned for migration of the CSG cultural heritage modeling to more abstract, robust and system independent data structures as described in this paper.
Changes by the authors:
More than ten years ago, concerned with digital persistence, the authors used Constructive Solid Geometry (CSG), a set of commonly  known and understood procedures, for modeling culturally valuable shapes. The authors, aware of the pending obsolescence of the system supporting the CSG modeling, planned for the lossless migration of the CSG cultural heritage modeling to a more abstract, robust and independent system of representation as described in this paper.
 2. A "... reflects the logical structure ..." - better maybe: semantic
    structure. Logic is debatable, in most cases there's more than one 'logical'
    way to (re-)construct something.
A right-handed nut does not fit a left-handed stud. Similarly, in construction processes there is a very limited number of choices, dictated by the local environment and culture. In the case of our Japanese temples, Japanese culture in particular dictates that there is only one proper form, or kata, for a given task. Everything is planned before execution. Japanese temple structures include unique solutions to earthquakes, snow loading, rain, and typhoons, as well as interpretations of classical design imported from China. Moreover, they empirically reflect the historical development of construction techniques, due to the orderly progression of craft passed from master to apprentice to master. Traditional Japanese buildings are/were cut and joined in a manner similar to a prefabrication process. So, there is a specific way that each building goes together. By "logical" we mean to convey "true" or "valid", as opposed to an arbitrarily constructed "beautiful picture", as discussed in B below.

"The construction of 3D computer reconstructions and Virtual Reality environments should be based upon a detailed and systematic analysis of the remains, not only from archaeological and historical data but also from close analysis of the building materials, structural engineering criteria and architectural aspects. Together with written sources and iconography, several hypotheses should be checked against the results and data, and 3D models ‘iterated’ toward the most probable reconstructions. All aspects of the site’s interpretation should be integrated."  The ENAME Charter, International Guidelines for Authenticity, Intellectual Integrity, and Sustainable Development in the Public Presentation of Archaeological and Historical Sites and Landscapes, Draft 2 (17 October 2002), Scientific and Professional Guidelines (B, C),  Articles 18 and 20.
http://www.heritage.umd.edu/CHRSWeb/Belgium/Proposed%20Charter.htm  [3] added to Introduction

     B - This is also one problem archaeologists
     have with virtual reconstructions, namely that sound knowledge and
     scientific hypotheses are very often not clearly separated in beautiful
     pictures or CG animations of virtual CH reconstructions.
This is not a virtual reconstruction, which typically uses single-precision math, but a conservation model using double-precision math to create digital preservation objects. In the museum conservation approach, our scientific hypotheses are very clearly separated from "beautiful pictures or animations" by abstracted mathematical modeling. Many current virtual constructions are polygons with images pasted on the surfaces in a way that makes them look good. We have great trouble with the fact that these virtual constructions, in part because of current modeling procedures, do not have the precision to carry sound knowledge or a scientific base.
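To make the precision point concrete, here is a small Python sketch (ours, not code from the paper; it assumes NumPy is available) showing how single-precision arithmetic drifts measurably when many small offsets are accumulated, while double precision does not:

    import numpy as np   # assumption: NumPy is installed; this is not code from the paper

    # Accumulate one million 0.1-unit offsets in single and in double precision.
    step, n = 0.1, 1_000_000
    single = np.float32(0.0)
    double = np.float64(0.0)
    for _ in range(n):
        single += np.float32(step)
        double += np.float64(step)

    exact = step * n
    print(abs(float(single) - exact))   # noticeable drift in single precision
    print(abs(float(double) - exact))   # negligible drift in double precision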
 3.  in 2.1: "in very small amounts of _physical_ space, ..."  done
in very small amounts of  physical space, large amounts of data can be stored,
 4. in 2.2: "- Archiving digital representations of raw data, and of reconstr..."
 5. in 2.3: You present a classification from "Measurements and drafting" to
     "Volumetric scanning and modeling"; do you have references/concrete
     examples showing all classes are relevant?
These are not intended to be a formal classification, but an overview of current techniques in digital modeling of culturally valuable objects.
6. in 2.4, Solutions:
A Your favorite 'open standards and procedures' issue still waives with 'open source'.
B And I would still hold against it that a good (i.e. precise) standard is really sufficient, and that the implementation of a specific computer program is really irrelevant.
C As long as there is a document that precisely describes the format of a given piece of data, it will always be possible to make use of these data, and thus the data cannot get lost.

The 1st paragraph of Section 2.4 has been rewritten, and the report
"It's About Time: Research Challenges in Digital Archiving and Long-term Preservation," April 2002, sponsored by the National Science Foundation and the Library of Congress, has been added as [13] to Section 2.4.
http://www.digitalpreservation.gov/index.php?nav=3&subnav=11
http://www.digitalpreservation.gov/repor/NSF_LC_Final_Report.pdf
The report emphasizes the scope of the problems associated with digital preservation and that current digital preservation methodologies are unsatisfactory, calling for new social and business structures for the support of digital public goods.

6. in 2.4, Solutions:  A Your favorite 'open standards and procedures' issue still waives with 'open source'.
The nature of digital materials is initiating fundamental changes in our social structures. The following is a reference to an overview of the open source phenomenon; though the verbiage "open source" may be new, the idea is not. http://www.wired.com/wired/archive/11.11/opensource.html
One cannot ignore the fact that user data is dependent on all of the attending digital processes and physical devices that are used to create it. Proprietary operating systems and device drivers, which control the physical devices, are not purchased by the end user, but are leased for a short period of time. When the lease is up, the user no longer has access to the data he has created.

6. in 2.4, Solutions: B And I would still hold against it that a good (i.e. precise) standard is really sufficient, and that the implementation of a specific computer program is really irrelevant.
6. in 2.4, Solutions: C As long as there is a document that precisely describes the format of a given piece of data, it will always be possible to make use of these data, and thus the data cannot get lost.


The above reasoning might be correct if "the format of a given piece of data" (user data) contained all the data needed to make use of that data. Most user data is not self-contained, because it holds references to resource data that is not contained within the user data itself. User data is the result of user input on a given type of computer (Sun, Mac, IBM, Compaq/DEC), with a given OS for use with different CPUs and levels of CPUs, with different application programs written to work on the different systems, and within a given technology time period of three to seven years. The user data can only be accessed within that technology time period by the same system, or by a similar system that meets the specific hardware and software specifications and contains the same libraries and other resources referenced by the application program that created the user data on the original system at the time of creation. Typically, the libraries and other data resources needed to make use of user data are not contained in the user data, but are only referenced by name or by pointing to a location or an address in memory. Furthermore, this resource data is not defined or contained within the program that created the data but exists as a supporting data file.

An example of user data dependency is as follows: some architects using AutoCAD downloaded a free font library, which did not come with the AutoCAD program and its supporting data files, allowing them to use a font type that looked like hand-drawn letters. Many drawings were made using this very beautiful font. However, the user data, the CAD drawings, did not contain the font definitions and supporting files, but only the name and the location of the font on the system that created it. So, to make use of the user data on a different system of the same specifications, that system also had to have the supporting font data in the same directory, with the same names, as the system used to create it. Some architects renamed the fonts, modified the fonts, or created their own private fonts under a standard AutoCAD font name and put them in the same font directory that the AutoCAD program used. The users who renamed a font to a standard AutoCAD name used this as protection against the use of their work by other architects: their drawings would load on a different system, but the labels and other text using the modified fonts would be so rearranged that the drawings could not be used.
AutoCAD itself changed its so-called standard fonts and font data, causing users tremendous rework when upgrading their AutoCAD program. Some CAD consultants invested time loading the old font definitions back in for their AutoCAD clients to avoid the cost of rework.

A document that precisely describes the format of a given piece of data does not prevent the loss of the data, because it does not contain the definition of the data.
As in the case above, user data dependencies are the rule, not the exception, because storing all of the data dependencies within the user data file is not practical. However, it can be argued that the definition of "a document that precisely describes the format of a given piece of data" can be expanded to include the encapsulation of all libraries and other data dependencies, so that "it will always be possible to make use of these data, and thus the data cannot get lost." We could call such an encapsulation a digital object. Assuming that only a small percentage of non-standard data dependencies exist which result in either the damage or loss of dependent data, this would seem to be a workable solution.

Unfortunately, this does not solve the digital persistence problem, because digital objects (as defined above) are dependent on many different processes within the application programs, which
in turn are dependent upon a given operating system (OS). In turn, the OS is dependent upon a great number of both large and small hardware devices, each dependent for its specific tasks on device drivers. The device drivers are based on assembly language programs that control each device through its underlying microarchitecture, made of modular units that are used in various combinations to extend the use of hardware resources. Accordingly, a digital object could be expanded to include the emulation of the above dependencies, to extend the usefulness of the digital object beyond its technological time period. However, looking at the level of complexity of digital systems, where current hardware elements number in the hundreds of thousands, operating systems contain some 28 million lines of code, and applications with their associated processes contain an unknown number of lines of code, one begins to understand the problems associated with emulation as a solution for digital persistence.

It is also noteworthy that all of the above-mentioned system layers have unexplained system features (sometimes called "bugs" when they are not repeatable). So, digital objects to be preserved must contain all of the above good (i.e. precise) standards that describe the complex systems and unexplained system features on which the data was conceived. Accordingly, a digital object could be defined as (data + meta data + ...... + nary + meta nary), but this definition does not fully take into account that it is statistically improbable that any generalized computer system will ever process the same data in exactly the same way, and that system emulation of unexplained system features may not be possible or cost effective.

The idea of the creation of a digital object as a solution to digital preservation runs into legal problems as well. The Digital Millennium Copyright Act makes it illegal to disclose proprietary technologies, whether by the use of meta data to precisely describe a proprietary user data format or by the emulation of proprietary data processes or devices. Secret protocols and operations of proprietary systems would require independent legal agreements with thousands of different copyright and patent holders. It is not at all unreasonable to assume that, even if one could obtain the legal permissions needed to create a precise standard for a digital object, one could not do so because of the sheer volume of information and the probability that some of the information is not available.
 
We suggest in this paper that digital objects made from transient and imprecise proprietary technologies are unsuitable for digital archiving. Accordingly, this paper advocates avoiding the technical and legal problems mentioned above by creating digital preservation objects from independent, persistent digital data structures that include the processes, operations, and history of the processes that made the data. (Critical computer systems made of specialized hardware are not discussed and are beyond the scope of this paper.)

References on the problems related to the DMCA:
Assn. for Computing Machinery:    
[14] added to Section 2.4
http://lcweb.loc.gov/copyright/1201/comments/171.pdf (PDF file)
Computer & Communications Industry Assn.:
http://lcweb.loc.gov/copyright/1201/comments/224.pdf (PDF file)
MIT Media Lab:
http://lcweb.loc.gov/copyright/1201/comments/185.pdf (PDF file)
Library of Congress (National Digital Library Program, and the Motion Picture, Broadcasting and Recorded Sound Div.):    
[15] added to Section 2.4

http://lcweb.loc.gov/copyright/1201/comments/175.pdf (PDF file)
(Yes, even the Library of Congress itself criticizes the DMCA!)
Princeton University:
http://lcweb.loc.gov/copyright/1201/comments/235.pdf (PDF file)
Assn. of American Universities, American Council on Education, and Natl. Assn. of State Universities:
http://lcweb.loc.gov/copyright/1201/comments/161.pdf (PDF file)
American Library Assn., American Assn. of Law Libraries, Assn. of Research Libraries, Medical Library Assn., and Special Libraries Assn.: http://lcweb.loc.gov/copyright/1201/comments/162.pdf (PDF file)

7. in 2.4, Solutions: Considering standards in the Cultural Heritage area, you
      might want to refer to CIDOC, the "International Committee for
      Documentation of the International Council of Museums", have a look at
      www.cidoc.icom.org, to see how many standards are on their way with regard  
      to CH. Also valuable: The virtual heritage network www.virtualheritage.net

As a member of the Digital Preservation Sub-Committee of CIDOC, one of the authors is writing a position paper on digital preservation standards. Having studied the current standards related to digital preservation at length, that author found that current digital preservation standards, including the established work flows and methods for the collection of digital data, do not take into account current data dependencies or the complex nature of digital systems. Furthermore, these methods for digital preservation are untested and unproven. What this paper proposes is the synthetic simulation of objects using precise mathematical modeling techniques, in contrast to the current standards based on text / meta tags. The use of text-based preservation methods is not sufficient because of the issues of precision and of change in language over time.

8. in 3.1, "Mathematical cracks": I don't agree. There are (many) bad polygonal
    models (or BReps as you say), but this is not a problem of the
    representation as such, but of the tools used to create the models. Isn't
    it also possible to create bad functional models? (Singularities, instable
    or ill conditioned functions)
It is not a problem of the polygon tools, but of the polygon representation, which allows manipulations that are not mathematically closed. Bad function models cannot be created. Singularities cannot exist, and unstable or ill-conditioned functions cannot exist without mathematical discovery and remediation; that is the nature of FRep and the use of a CSG tree or history. Polygonal representation has no history or structural volume; singularities abound in this representation.
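As a concrete illustration of this closure property, the following minimal Python sketch (ours, not the paper's HyperFun code; it assumes the standard R-function formulas for union and intersection) builds a small CSG tree of FRep primitives. The result of every operation is again an ordinary real-valued defining function, so point membership queries remain well defined at every stage of the construction history:

    import math

    def sphere(cx, cy, cz, r):
        # FRep primitive: f >= 0 inside, f = 0 on the boundary, f < 0 outside
        return lambda x, y, z: r * r - ((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2)

    def r_union(f1, f2):
        def f(x, y, z):
            a, b = f1(x, y, z), f2(x, y, z)
            return a + b + math.hypot(a, b)      # R-function union
        return f

    def r_subtract(f1, f2):
        def f(x, y, z):
            a, b = f1(x, y, z), -f2(x, y, z)     # subtraction = intersection with the complement
            return a + b - math.hypot(a, b)      # R-function intersection
        return f

    # A tiny CSG tree / construction history: two spheres united, then a cavity subtracted
    model = r_subtract(r_union(sphere(0, 0, 0, 1.0), sphere(1.2, 0, 0, 1.0)),
                       sphere(0.6, 0, 0, 0.4))

    # Point membership stays well defined however deep the tree grows
    print(model(0.6, 0.0, 0.0) >= 0)    # False: inside the subtracted cavity
    print(model(-0.5, 0.0, 0.0) >= 0)   # True: inside the first sphere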
9. in 3.3, Extensible operations: "Many modeling operations are closed..."
    Are there modeling operations which are not? Clarify!
    "without changing its integrity" - adding a buggy function DOES affect
     the integrity, doesn't it?
Yes, there are modeling operations that are not closed; these are rigid body motions, which do not affect the mathematical integrity of a structure. If one writes a buggy program that uses FRep, one will know it, because there will be no mathematical closure.
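To illustrate the extensibility point, here is a hedged Python sketch (ours, not the authors' implementation; the blending-union formula is a commonly published FRep blend and the parameters a0, a1, a2 are illustrative) of how a new operation can be added to such a system without changing anything that already exists, because its result is just another defining function:

    import math

    def sphere(cx, cy, cz, r):
        return lambda x, y, z: r * r - ((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2)

    def blend_union(f1, f2, a0=0.5, a1=1.0, a2=1.0):
        # Assumed blending-union form: R-function union plus a smooth displacement.
        def f(x, y, z):
            v1, v2 = f1(x, y, z), f2(x, y, z)
            union = v1 + v2 + math.hypot(v1, v2)
            disp = a0 / (1.0 + (v1 / a1) ** 2 + (v2 / a2) ** 2)
            return union + disp
        return f

    # The new operation plugs in like the built-in ones: its result is just
    # another defining function, so the rest of the system is untouched.
    blended = blend_union(sphere(0, 0, 0, 1.0), sphere(2.4, 0, 0, 1.0))
    print(blended(1.2, 0.0, 0.0) >= 0)   # True: the blend adds material in the gap a plain union would leave empty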
10. in 3.4, Lightweight protocol: "The average size of HyperFun files is 5K" - still a void statement, or is there a hard-coded 5K limitation? - drop it; later in 4.4 Web presentation you're repeating that statement anyways
3.4, Lightweight protocol has been rewritten and an example figure inserted.

11. in 3.5, 3,  What is 'transparent learning'?
Short answer:  empirical learning without the learner being aware of the learning acquisition process.
example:  driving a car
Long answer with regard to transparent learning in a digital context: sustained, long-term, intrinsic personal growth obtained through conceptual layering of an immersive, synthetic environment, where a person's learning is independent of a given subject and is indirectly gained through the virtual experience. Associative thinking is invoked in an event-driven environment structured to improve the ability to think in multi-converging determinants and to apply logical procedures and processes, where the content is secondary to the task of stimulating growth in the cortex region of the brain. This kind of emergent learning is digitally based on detailed simulations of complex interactive environments.
example:  online computer games
12. in 3.5, Hybrid system: You propose the BRep for vertex editing.
But the BRep is the output of your polygonizer right?
Yes that is correct.
So how would you feed back the changed BRep to the HyperFun description it was generated from?

The polygonal BRep is both a visual and a dimensional representation of the FRep. If you change a vertex of the polygon which represents the FRep object, then a corresponding FRep object representing the difference is created and can be added to or subtracted from the model. If the vertex change is within the limits of the primitive structure's integrity, then the primitive's definition may be changed rather than adding or subtracting a difference object.
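The following hypothetical Python sketch (ours, not the authors' implementation; the choice of a spherical difference object and its mid-point placement are illustrative assumptions) shows one way the vertex edit described above could be fed back into the FRep model as a difference object:

    import math

    def sphere(cx, cy, cz, r):
        return lambda x, y, z: r * r - ((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2)

    def r_union(f1, f2):
        def f(x, y, z):
            a, b = f1(x, y, z), f2(x, y, z)
            return a + b + math.hypot(a, b)
        return f

    def r_subtract(f1, f2):
        def f(x, y, z):
            a, b = f1(x, y, z), -f2(x, y, z)
            return a + b - math.hypot(a, b)
        return f

    def apply_vertex_edit(model, old_vertex, new_vertex):
        # Local edit region: a small sphere spanning the old and new vertex positions.
        cx, cy, cz = ((o + n) / 2.0 for o, n in zip(old_vertex, new_vertex))
        bump = sphere(cx, cy, cz, math.dist(old_vertex, new_vertex))
        if model(*new_vertex) >= 0:
            # Vertex dragged into the solid: subtract the difference object.
            return r_subtract(model, bump)
        # Vertex dragged outward: add the difference object.
        return r_union(model, bump)

    base = sphere(0, 0, 0, 1.0)
    edited = apply_vertex_edit(base, (1.0, 0.0, 0.0), (1.3, 0.0, 0.0))
    print(edited(1.2, 0.0, 0.0) >= 0)   # True: material was added where the vertex was pulled outward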
Is that possible anyways?
Maybe it is and maybe it is not... editing can be done and we will soon find the limitations.

You're also saying that the BRep provides more fine-grained control than HyperFun/FRep models;

No, we are not saying that at all. It is a GUI to the FRep. Just as a raster file of points is a GUI for the modification of a higher-grained vector file using object snaps (a lookup table of more fine-grained data), the low-resolution or coarse-grained visual representation of the BRep gives the user access to the fine-grained or high-resolution data of the FRep.

isn't that a draw-back that greatly limits the applicability of your approach?

No more than an imprecise raster image limits the application and manipulation of precise vector data.

Is there an analogy to Euler operators for implicit solids?

Euler operators try to provide mathematical closure for polygonal surfaces. FRep has mathematical closure.

13. in 4.5, Local/Global methods: Is it possible to derive a Taylor series for an FRep model analytically?
      Or do you resort to a numerical approximation of  the derivative?

It is not analytical; it is computed numerically (by finite differences) or by automatic differentiation.
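For concreteness, here is a minimal Python sketch of the numerical route (ours; the central-difference formula, the step size h, and the stand-in sphere model are assumptions, not values from the paper):

    # Central finite-difference gradient of an FRep defining function.
    def sphere(x, y, z):
        return 1.0 - (x * x + y * y + z * z)

    def gradient(f, x, y, z, h=1e-5):
        return (
            (f(x + h, y, z) - f(x - h, y, z)) / (2.0 * h),
            (f(x, y + h, z) - f(x, y - h, z)) / (2.0 * h),
            (f(x, y, z + h) - f(x, y, z - h)) / (2.0 * h),
        )

    print(gradient(sphere, 0.5, 0.0, 0.0))   # approximately (-1.0, 0.0, 0.0)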

----------------------------------------------
  REVIEWER No.2
----------------------------------------------
 Paper Number: 02C21
 Author(s):    C. Vilbrandt, G. Pasko, A. Pasko, P.-A. Fayolle,
              T. Vilbrandt,  J. R. Goodwin, J. M. Goodwin, T. L. Kunii
 Paper Title:  Cultural Heritage Preservation Using Constructive Shape Modeling
3. Originality/novelty    High
4. Importance  Good
5. Technical soundness            Good
6. Clarity of writing and presentation    Good
 
7. Clarity and quality of illustrations (if present) Average
 
8.  Does the ACM classification provided correspond to the paper topic?
    (all papers should be classified using the ACM Computing
     Classification System - ACM CCS, found at http://www.acm.org/class )
   Yes
9.  Should the paper be shortened?
   No
10. Overall judgment
   Good
11. Recommendation
   Accept
----------------------------------------------
  REVIEWER No.3
----------------------------------------------
 Paper Number: 02C21
 Author(s):    C. Vilbrandt, G. Pasko, A. Pasko, P.-A. Fayolle,
              T. Vilbrandt,  J. R. Goodwin, J. M. Goodwin, T. L. Kunii
 Paper Title: Cultural Heritage Preservation Using Constructive Shape Modeling
1. Classification of the paper
    Practice-and-experience paper (variants, applications, case studies...)
2. Does the paper address computer graphics?
    Yes
3. Originality/novelty
    Good
4. Importance
    Good
5. Technical soundness
    Average
6. Clarity of writing and presentation
    Good
7. Clarity and quality of illustrations (if present)
    Good
8.  Does the ACM classification provided correspond to the paper topic?
     (all papers should be classified using the ACM Computing
      Classification System - ACM CCS, found at http://www.acm.org/class )
   No (none given)
If NO (or if the author has not indicated it), please specify
alternative ACM classification:
  I.3.5 Curve, surface, solid, and object representations
 
9.  Should the paper be shortened?
    No
10. Overall judgment
    Good
11. Recommendation
    Accept after minor revision
 
Information for the Authors
===========================
 
The new section on fitting a parametrized FRep model to a point cloud
demonstrates a semi-automatic method to process scanned 3D points, which is
important for the proposed data structure to be useful as an archiving tool.
A few comments remain:
1.) The authors state that exact duplicates can be created from digital models.
This is not true, the model creation process suffers from three different kinds
of error (sampling, discretization, and quantization error, see "Progressive
Geometry Compression", Khodakovsky/Schröder/Sweldens, SIGGRAPH 2000),
and reproduction of the model is also subject to mechanical deviations. So there
are at least four independent sources of errors, making an *exact* duplicate
impossible. A more valid statement would be that the total error can be
(arbitrarily?) reduced by increasing technical efforts, but even this is
debatable.
The authors were not clear in their writing here. We do not mean exact physical duplicates of cultural heritage objects, but exact digital duplicates of digital models of cultural heritage objects.
This allows for any number of exact digital duplicates of digitally modeled objects to be stored in various locations, thus providing both public access and security; such is not possible with physical objects.
2.) The authors mention the necessity of a stable storage medium at several
places. Could you give an example of what you consider "as stable as e.g.
Egyptian pyramids", i.e., providing secure storage for thousands of years?
Current mass storage media (magnetic and optical) are clearly out of question
for this purpose.
The words "stable" and "medium" do not appear in the paper; two references to "storage" are in regard to compression, and one to changes in storage formats. The issue of secure and stable storage for digital preservation is currently addressed by the automatic refreshing of magnetic and optical data. Write-once read-many-times (WORM) systems are being researched as an indirect physical authority for authenticity. Possible containment devices might be based on PEDOT: Sven Moller, et al., "A polymer/semiconductor write-once read-many-times memory," Nature 426, 166-169 (13 November 2003); doi:10.1038/nature02070. http://www.nature.com/cgi-taf/DynaPage.taf?file=/nature/journal/v426/n6963/abs/nature02070_fs.html
In the digital view, the issues of stability are answered by system redundancy and dynamic operations. Certainly a stable storage medium is an important topic for digital preservation but beyond the scope of this paper and a subject for future research.
3.) The authors are still not very specific on file sizes. The average FRep
file size statement in Section 3.4 is confusing, the same is true for the
average VRML file size in Section 4.4. A table comparing the file sizes (VRML,
FRep) of all models discussed in the paper would be helpful.

3.4, Lightweight protocol has been rewritten and an example figure inserted.
The average VRML file size in Section 4.4 has been deleted, since VRML file sizes are quite arbitrary, depending on the complexity of the model.
FRep allows a high degree of compression while still accurately describing an object. However, a high degree of computational processing is needed to uncompress it to a viewable VRML file or other viewable format. By comparison, VRML is not a precise, mathematical definition of an object, nor does it offer the same level of compression as FRep.
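As a rough, hypothetical illustration (ours; the byte counts are back-of-the-envelope estimates, not measurements from the paper) of why a functional description stays compact while its polygonized/VRML form grows with resolution, here is a short Python sketch: the FRep sphere is one short formula, while a latitude/longitude tessellation of the same sphere needs on the order of 2*n*n triangles.

    # Rough growth of a naive (non-indexed) triangle mesh of a sphere with resolution.
    def mesh_size(n):
        triangles = 2 * n * n
        # 3 vertices per triangle, 3 coordinates each, ~4 bytes per float
        # (an indexed mesh would be smaller, but grows at the same rate).
        return triangles, triangles * 3 * 3 * 4

    for n in (32, 128, 512):
        tris, approx_bytes = mesh_size(n)
        print(f"{n:4d} segments: {tris:8d} triangles, ~{approx_bytes / 1024:.0f} KB raw")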
4.) A formatting remark: Refs. 30 and 31 (last paragraph of Section 2.3) appear
out of order.
 
Corrected.