Subject: Re: Answers to reviews 2
----------------------------------------------
REVIEWER No.1
----------------------------------------------
Paper Number: 02C21
Author(s): C. Vilbrandt, G. Pasko, A. Pasko, P.-A. Fayolle,
T. Vilbrandt, J. R. Goodwin, J. M. Goodwin, T. L. Kunii
Paper Title: Cultural Heritage Preservation Using Constructive
Shape Modeling
1. Classification of the paper
Choose one of:
Practice-and-experience paper (variants, applications, case studies...)
2. Does the paper address computer graphics?
Choose one of:
Yes
3. Originality/novelty
Choose one of:
Good
4. Importance
Choose one of:
High
5. Technical soundness
Choose one of:
Good
6. Clarity of writing and presentation
Choose one of:
High
7. Clarity and quality of illustrations (if present)
Choose one of:
Good (some of them (Fig. 10) could be bigger but space is limited)
8. Does the ACM classification provided correspond to the paper
topic?
Yes
I.3.5 Computational Geometry and Object Modeling
Boundary representations
Constructive solid geometry (CSG)
Curve, surface, solid, and object representations
Modeling packages
I.3.6 Methodology and Techniques
Graphics data structures and data types
Languages
Standards
I.3.8 Applications
9. Should the paper be shortened?
Choose one of:
Yes (introductory part)
10. Overall judgment
Choose one of:
High
11. Recommendation
Choose one of:
Accept
Information for the Authors
---------------------------
The paper has been diligently rewritten; the new version is indeed
a major revision. Most of my prior remarks are now obsolete.
The problem that remains might be the length of the paper. It
tries to cover a very broad thematic range, although one must say
that it's well balanced:
- General Applicability of Digital Preservation
in the domain of Cultural Heritage, different methods and their problems
- The HyperFun/FRep approach
- Case study with three examples
- Fitting of scanned data to a functional representation
The first introductory sections could probably be shortened. The
paper should be proof-read by a native speaker ("On the other
hand, STEP protocol supports CSG" etc.).
On the other hand, STEP protocol ... does support CSG.
Considering my (limited) experience with real archaeologists, I
have my doubts if they will be as enthusiastic about the validity
and the usability of the HyperFun/FRep approach as the authors
are. (Which doesn't mean there should be no such research, of
course)
A few concrete remarks.
1. Introduction: "Unfortunately, ... will become unusable " - are you sure?
Proposal: "... will probably become unusable..."
Original:
Unfortunately, much of the current digital modeling, visualization and animation of cultural heritage objects will become unusable before the existing heritage objects themselves are destroyed or lost. Thus, we discuss the technical problems concerning digital persistence as related to the creation of system independent digital data structures. We also argue the benefits of using open standards and procedures whenever possible as critical to the archiving of data.
Added the following sentence and supporting reference. Also attempted to shorten the sentence as requested above by RV1.
"Rapid changes in the means of recording information, in the formats for storage, and in the technologies for use threaten to render the life of information in the digital age as, ..... , 'nasty, brutish and short'." [2] Much of the current digital modeling, visualization and animation of cultural heritage objects will become unusable before the existing heritage objects themselves are destroyed or lost. Thus, we discuss the technical problems concerning digital persistence as related to the creation of system independent digital data structures. We also argue the use of open standards and procedures as critical to the archiving of data.
[2] added to Introduction and Section 2.4 http://www.rlg.org/ArchTF/
"Preserving Digital Information, Final Report and Recommendations of the Task Force on Archiving of Digital Information", Commission on Preservation and Access and The Research Libraries Group, Inc. , May 1996.
Original:
More than ten years ago, concerned with digital persistence and lacking any robust tools to choose from which met open standards, the authors made careful selection of Constructive Solid Geometry (CSG) as a model for culturally valuable shapes. The authors, aware of the pending probable obsolescence of the supporting tools and data structures, planned for migration of the CSG cultural heritage modeling to more abstract, robust and system independent data structures as described in this paper.
Changes by the authors:
More than ten years ago, concerned with digital persistence, the authors used Constructive Solid Geometry (CSG), a set of commonly known and understood procedures, for modeling culturally valuable shapes. The authors, aware of the pending obsolescence of the system supporting the CSG modeling, planned for the lossless migration of the CSG cultural heritage modeling to a more abstract, robust and independent system of representation as described in this paper.
2. A "... reflects the logical structure ..." - better maybe: semantic
structure. Logic is debatable, in most cases there's more than one 'logical'
way to (re-)construct something.
A right-handed nut does not fit a left-handed stud. Similarly, in construction processes there is a very limited number of choices, dictated by the local environment and culture. In the case of our Japanese temples, Japanese culture in particular dictates that there is only one proper form, or kata, for a given task. Everything is planned before execution. Japanese temple structures include unique solutions to earthquakes, snow loading, rain, typhoons, and the interpretation of classical design imported from China. Moreover, they empirically reflect the historical development of construction techniques, due to the orderly progression of craft passed from master to apprentice to master. Traditional Japanese buildings are, and were, cut and joined in a manner similar to a prefabrication process, so there is a specific way that each building goes together. By "logical" we mean to convey "true" or "valid", as opposed to an arbitrarily constructed "beautiful picture", as discussed in B below.
"The construction of 3D computer reconstructions and Virtual Reality environments should be based upon a detailed and systematic analysis of the remains, not only from archaeological and historical data but also from close analysis of the building materials, structural engineering criteria and architectural aspects. Together with written sources and iconography, several hypotheses should be checked against the results and data, and 3D models ‘iterated’ toward the most probable reconstructions. All aspects of the site’s interpretation should be integrated." The ENAME Charter, International Guidelines for Authenticity, Intellectual Integrity, and Sustainable Development in the Public Presentation of Archaeological and Historical Sites and Landscapes, Draft 2 (17 October 2002), Scientific and Professional Guidelines (B, C), Articles 18 and 20.
http://www.heritage.umd.edu/CHRSWeb/Belgium/Proposed%20Charter.htm [3] added to Introduction
B - This is also one problem archaeologists
have with virtual reconstructions, namely that sound knowledge and
scientific hypotheses are very often not clearly separated in beautiful
pictures or CG animations of virtual CH reconstructions.
This is not a virtual reconstruction, which typically uses single-precision math, but a conservation model using double-precision math to create digital preservation objects. In the museum conservation approach, our scientific hypotheses are very clearly separated from "beautiful pictures or animations" by abstracted mathematical modeling. Many current virtual constructions are polygons with images pasted on the surfaces in a way that makes them look good. We have great trouble with the fact that these virtual constructions, in part because of current modeling procedures, do not have the precision to carry sound knowledge or a scientific base.
3. in 2.1: "in very small amounts of _physical_ space, ..." done
in very small amounts of physical space, large amounts of data can be stored,
4. in 2.2: "- Archiving digital representations of raw data, and of reconstr..."
5. in 2.3: You present a classification from "Measurements and drafting" to
"Volumetric scanning and modeling"; do you have references/concrete
examples showing all classes are relevant?
These are not intended to be a formal classification, but an overview of current techniques in digital modeling of culturally valuable objects.
6. in 2.4, Solutions: A Your favorite 'open standards and procedures' issue still waives with 'open source'.
As a member of the Digital Preservation Sub-Committee of CIDOC, one of the authors is writing a position paper on digital preservation standards. Having studied the current standards related to digital preservation at length, he found that current digital preservation standards, including the established work flows and methods for the collection of digital data, do not take into account the current data dependencies or the complex nature of digital systems. Furthermore, these methods for digital preservation are untested and unproven. What this paper proposes is the synthetic simulation of objects using precise mathematical modeling techniques, in contrast to the current standards based on text / meta tags. The use of text-based preservation methods is not sufficient because of the issues of precision and of change in language over time.
8. in 3.1, "Mathematical cracks": I don't agree. There are (many) bad polygonal
models (or BReps as you say), but this is not a problem of the
representation as such, but of the tools used to create the models. Isn't
it also possible to create bad functional models? (Singularities, unstable
or ill-conditioned functions)
It is not a problem of the polygon tools, but of the polygon representation, which allows manipulations that are not mathematically closed. Bad functional models cannot be created. Singularities cannot exist, and unstable or ill-conditioned functions cannot exist without mathematical discovery and remediation; that is the nature of FRep and the use of a CSG tree or history. Polygonal representation has no history or structural volume; singularities abound in this representation.
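The closure property claimed above can be illustrated with a minimal sketch (our own Python illustration, not the paper's HyperFun code): a shape is a real-valued defining function, and min/max set operations (one simple choice of R-functions) always yield another defining function.

```python
# Minimal FRep sketch (illustrative only, not the paper's HyperFun code).
# A shape is a real-valued function f(x, y, z): f >= 0 inside, f < 0 outside.
# Set operations built with min/max (a simple choice of R-functions) always
# return another real-valued function, so the representation stays closed.

def sphere(cx, cy, cz, r):
    """Defining function of a sphere: positive inside, negative outside."""
    return lambda x, y, z: r * r - ((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2)

def union(f, g):
    return lambda x, y, z: max(f(x, y, z), g(x, y, z))

def intersection(f, g):
    return lambda x, y, z: min(f(x, y, z), g(x, y, z))

def subtraction(f, g):
    return lambda x, y, z: min(f(x, y, z), -g(x, y, z))

a = sphere(0.0, 0.0, 0.0, 1.0)
b = sphere(1.5, 0.0, 0.0, 1.0)
dumbbell = union(a, b)               # still just a function of (x, y, z)

print(dumbbell(0.0, 0.0, 0.0) > 0)   # inside sphere a -> True
print(dumbbell(3.0, 0.0, 0.0) < 0)   # outside both spheres -> True
```

Because every operation returns a defining function of the same kind, the CSG tree of such operations can be stored and re-evaluated exactly, which is the sense of "mathematical closure" used in this answer.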
9. in 3.3, Extensible operations: "Many modeling operations are closed..."
Are there modeling operations which are not? Clarify!
"without changing its integrity" - adding a buggy function DOES affect
the integrity, doesn't it?
Yes, there are modeling operations that are not closed; these are rigid body motions, which do not affect the mathematical integrity of a structure. If one makes a buggy program that uses FRep, one will know it, because there will be no mathematical closure.
10. in 3.4, Lightweight protocol: "The average size of HyperFun files is 5K" -
3.4, Lightweight protocol has been rewritten and an example figure inserted.
11. in 3.5, 3, What is 'transparent learning'?
Short answer: empirical learning without the learner being aware of the learning acquisition process.
example: driving a car
Long answer, with regard to transparent learning in a digital context: sustained, long-term, intrinsic personal growth obtained through conceptual layering of an immersive, synthetic environment, where a person's learning is independent of a given subject and is indirectly gained through the virtual experience. Associative thinking is invoked in an event-driven environment structured to improve the ability to think in multi-converging determinants and to apply logical procedures and processes, where the content is secondary to the task of stimulating growth in the cortex region of the brain. This kind of emergent learning is digitally based on detailed simulations of complex interactive environments.
example: online computer games
12. in 3.5, Hybrid system: You propose the BRep for vertex editing.
But the BRep is the output of your polygonizer right?
Yes that is correct.
So how would you feed back the changed BRep to the HyperFun description it was generated from?
The polygonal BRep is both a visual and a dimensional representation of the FRep. If you change a vertex of the polygon which represents the FRep object, then a corresponding FRep object representing the difference is created and can be added to or subtracted from the model. If the vertex change is within the limits of the primitive structure's integrity, then the primitive's definition may be changed, rather than adding or subtracting a difference object.
Is that possible anyways?
Maybe it is and maybe it is not... editing can be done and we will soon find the limitations.
You're also saying that the BRep provides more fine-grained control than HyperFun/FRep models;
No, we are not saying that at all. It is a GUI to the FRep. Just as a raster file of points is a GUI for the modification of a finer-grained vector file using object snaps (a lookup table of finer-grained data), the low-resolution or coarse-grained visual representation of the BRep gives the user access to the fine-grained or high-resolution data of the FRep.
isn't that a draw-back that greatly limits the applicability of your approach?
No more than an imprecise raster image limits the application and manipulation of precision vector data.
Is there an analogy to Euler operators for implicit solids?
Euler operators try to provide mathematical closure for polygonal surfaces. FRep has mathematical closure.
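The difference-object idea from item 12 can be sketched along similar lines (a hypothetical illustration of ours, not the authors' editing implementation; the primitive, its position, and its radius are invented):

```python
# Hypothetical sketch of capturing a vertex edit as an FRep difference
# object: the original defining function is left untouched, and the edit
# is recorded as a small primitive combined with it.

def sphere(cx, cy, cz, r):
    """Defining function of a sphere: positive inside, negative outside."""
    return lambda x, y, z: r * r - ((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2)

def union(f, g):
    return lambda x, y, z: max(f(x, y, z), g(x, y, z))

original = sphere(0.0, 0.0, 0.0, 1.0)

# Suppose a BRep vertex near (1, 0, 0) was dragged outward: record the
# change as a small "bump" primitive added to the model.
bump = sphere(1.0, 0.0, 0.0, 0.3)
edited = union(original, bump)

print(edited(1.2, 0.0, 0.0) > 0)     # covered only by the bump -> True
print(original(1.2, 0.0, 0.0) < 0)   # original surface did not reach it -> True
```

The original model and its history are preserved; undoing the edit would mean dropping the bump node from the CSG tree.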
13. in 4.5, Local/Global methods:
Is it possible to derive a Taylor series for an FRep model
analytically?
Or do you resort to a numerical
approximation of the derivative?
It is not analytical; it is numerical, or obtained by automatic differentiation.
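The numerical route mentioned in the answer can be sketched with central differences (our illustration; the test function and the step size h are assumptions):

```python
# Central-difference approximation of the gradient of a defining
# function f(x, y, z).  The step h trades truncation error against
# floating-point round-off.

def gradient(f, x, y, z, h=1e-5):
    dx = (f(x + h, y, z) - f(x - h, y, z)) / (2.0 * h)
    dy = (f(x, y + h, z) - f(x, y - h, z)) / (2.0 * h)
    dz = (f(x, y, z + h) - f(x, y, z - h)) / (2.0 * h)
    return (dx, dy, dz)

# Unit sphere f = 1 - x^2 - y^2 - z^2 has exact gradient (-2x, -2y, -2z).
f = lambda x, y, z: 1.0 - x * x - y * y - z * z
gx, gy, gz = gradient(f, 0.5, 0.0, 0.0)
print(abs(gx + 1.0) < 1e-6)   # matches the exact value -2 * 0.5 -> True
```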
----------------------------------------------
REVIEWER No.2
----------------------------------------------
Paper Number: 02C21
Author(s): C. Vilbrandt, G. Pasko, A. Pasko, P.-A. Fayolle,
T. Vilbrandt, J. R. Goodwin, J. M. Goodwin, T. L. Kunii
Paper Title: Cultural Heritage Preservation Using Constructive Shape Modeling
3. Originality/novelty High
4. Importance Good
5. Technical soundness Good
6. Clarity of writing and presentation Good
7. Clarity and quality of illustrations (if present) Average
8. Does the ACM classification provided correspond to the paper
topic?
(all papers should be classified using the ACM Computing
Classification System - ACM CCS, found at http://www.acm.org/class )
Yes
9. Should the paper be shortened?
No
10. Overall judgment
Good
11. Recommendation
Accept
----------------------------------------------
REVIEWER No.3
----------------------------------------------
Paper Number: 02C21
Author(s): C. Vilbrandt, G. Pasko, A. Pasko, P.-A. Fayolle,
T. Vilbrandt, J. R. Goodwin, J. M. Goodwin, T. L. Kunii
Paper Title: Cultural Heritage Preservation Using Constructive Shape Modeling
1. Classification of the paper
Practice-and-experience paper (variants, applications, case studies...)
2. Does the paper address computer graphics?
Yes
3. Originality/novelty
Good
4. Importance
Good
5. Technical soundness
Average
6. Clarity of writing and presentation
Good
7. Clarity and quality of illustrations (if present)
Good
8. Does the ACM classification provided correspond to the paper topic?
(all papers should be classified using the ACM Computing
Classification System - ACM CCS, found at http://www.acm.org/class )
No (none given)
If NO (or if the author has not indicated it), please specify
alternative ACM classification:
I.3.5 Curve, surface, solid, and object representations
9. Should the paper be shortened?
No
10. Overall judgment
Good
11. Recommendation
Accept after minor revision
Information for the Authors
===========================
The new section on fitting a parametrized FRep model to a point cloud
demonstrates a semi-automatic method to process scanned 3D points, which is
important for the proposed data structure to be useful as an archiving tool.
A few comments remain:
1.) The authors state that exact duplicates can be created from digital models.
This is not true, the model creation process suffers from three different kinds
of error (sampling, discretization, and quantization error, see "Progressive
Geometry Compression", Khodakovsky/Schröder/Sweldens, SIGGRAPH 2000),
and reproduction of the model is also subject to mechanical deviations. So there
are at least four independent sources of errors, making an *exact* duplicate
impossible. A more valid statement would be that the total error can be
(arbitrarily?) reduced by increasing technical efforts, but even this is
debatable.
The authors failed to write this clearly. We do not mean exact physical duplicates of cultural heritage objects, but exact digital duplicates of digital models of cultural heritage objects.
This allows for any number of exact digital duplicates of digitally modeled objects to be stored in various locations, thus providing both public access and security; such is not possible with physical objects.
2.) The authors mention the necessity of a stable storage medium at several
places. Could you give an example of what you consider "as stable as e.g.
Egyptian pyramids", i.e., providing secure storage for thousands of years?
Current mass storage media (magnetic and optical) are clearly out of the question
for this purpose.
The words "stable" and "medium" do not appear in the paper; two references to "storage" are in regard to compression, and one to changes in storage formats. The issue of secure and stable storage for digital preservation is currently addressed by the automatic refreshment of magnetic and optical data. Write-once read-many-times (WORM) systems are being researched as an indirect physical authority for authenticity. Possible containment devices might be based on PEDOT; see Sven Moller et al., "A polymer/semiconductor write-once read-many-times memory", Nature 426, 166-169 (13 November 2003); doi:10.1038/nature02070. http://www.nature.com/cgi-taf/DynaPage.taf?file=/nature/journal/v426/n6963/abs/nature02070_fs.html
In the digital view, the issues of stability are answered by system redundancy and dynamic operations. Certainly a stable storage medium is an important topic for digital preservation but beyond the scope of this paper and a subject for future research.
3.) The authors are still not very specific on file sizes. The average FRep
file size statement in Section 3.4 is confusing, the same is true for the
average VRML file size in Section 4.4. A table comparing the file sizes (VRML,
FRep) of all models discussed in the paper would be helpful.
3.4, Lightweight protocol has been rewritten and an example figure inserted.
The average VRML file size in Section 4.4 has been deleted, since VRML file sizes are quite arbitrary, depending on the complexity of the model.
FRep allows a high degree of compression while still accurately describing an object. However, a high degree of computational processing is needed to uncompress it to a viewable VRML file or other viewable format. By comparison, VRML is not a precise mathematical definition of an object, nor does it offer the same level of compression as FRep.
4.) A formatting remark: Refs. 30 and 31 (last paragraph of Section 2.3) appear
out of order.
Corrected.