<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.tei-c.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Desmond</id>
	<title>TEIWiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.tei-c.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Desmond"/>
	<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=Special:Contributions/Desmond"/>
	<updated>2026-04-21T13:41:10Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.32.0</generator>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=Talk:GeneticEditionDraf1Comments&amp;diff=6668</id>
		<title>Talk:GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=Talk:GeneticEditionDraf1Comments&amp;diff=6668"/>
		<updated>2009-05-29T00:17:31Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;It would be interesting to get some reactions to these comments. Otherwise it would be fair to assume that these points are conceded.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6587</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6587"/>
		<updated>2009-05-24T11:50:56Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach, in which each physical document is encoded separately, even when the documents are merely drafts of a single text. As a result there is a great deal of redundant information in their representation, which only increases the work required of editors and software to maintain copies of text that are supposed to be linked or identical. It would be simpler and more efficient for each piece of text that occurs exactly once in a work to be represented by a single, unique piece of text.&lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts have a structure that can be broken down into a hierarchy of changes which can be conveniently grouped and arbitrarily nested. Similarly, in section 4.1 a strict hierarchy of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line is imposed. Since Barnard's 1988 paper, in which he pointed out the inherent failure of markup to adequately represent even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation, which seeks to reassert the OHCO thesis (a thesis withdrawn by its own authors), will fail to adequately represent these complex genetic texts, which are primarily non-hierarchical in structure. Is it really possible any longer, for texts that will be subject to anything from mild to extreme overlap, to propose a standard for the future that essentially ignores the overlap problem? The past twenty years of research on this topic cannot be so lightly set aside.&lt;br /&gt;
&lt;br /&gt;
The proposal also does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup in individual files and as links between documentary versions. Collation programs work by comparing what are essentially plain-text files, containing only light markup for references in COCOA, or empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML makes this proposal very difficult, at best, to achieve. It would be better if a purely digital representation of the text were the objective, since in that case no apparatus would be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism for transposition as described also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism; if it allows chunks of transposed text to be moved around, however, it will fail whenever the chunks contain non-well-formed markup or the destination does not permit that markup at that point in the schema. And if transpositions between physical versions are allowed (these in fact comprise the majority of cases), how can such a mechanism work, especially when the transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed encoding standard is incorporated into TEI, however, this advantage will be lost: the proposed codes will simply become part of the more generic, and hence more verbose, TEI language. There seems to be very little in the proposals sketched here that cannot already be encoded with the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes they believe need to be added, how accurately these can represent the textual phenomena, and how efficiently they will work in software.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6586</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6586"/>
		<updated>2009-05-24T11:48:47Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach, in which each physical document is encoded separately, even when the documents are merely drafts of a single text. As a result there is a great deal of redundant information in their representation, which only increases the work required of editors and software to maintain copies of text that are supposed to be linked or identical. It would be simpler and more efficient for each piece of text that occurs exactly once in a work to be represented by a single, unique piece of text.&lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts have a structure that can be broken down into a hierarchy of changes which can be conveniently grouped and arbitrarily nested. Similarly, in section 4.1 a strict hierarchy of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line is imposed. Since Barnard's 1988 paper, in which he pointed out the inherent failure of markup to adequately represent even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation, which seeks to reassert the OHCO thesis (a thesis withdrawn by its own authors), will fail to adequately represent these complex genetic texts, which are primarily non-hierarchical in structure. Is it really possible any longer, for texts that will be subject to anything from mild to extreme overlap, to propose a standard for the future that essentially ignores the overlap problem? The past twenty years of research on this topic cannot be so lightly set aside.&lt;br /&gt;
&lt;br /&gt;
The proposal also does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup in individual files and as links between documentary versions. Collation programs work by comparing what are essentially plain-text files, containing only light markup for references in COCOA, or empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML makes this proposal very difficult, at best, to achieve. It would be better if a purely digital representation of the text were the objective, since in that case no apparatus would be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism for transposition as described also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism; if it allows chunks of transposed text to be moved around, however, it will fail whenever the chunks contain non-well-formed markup or the destination does not permit that markup at that point in the schema. And if transpositions between physical versions are allowed (these in fact comprise the majority of cases), how can such a mechanism work, especially when the transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed encoding standard is incorporated into TEI, however, this advantage will be lost: the proposed codes will simply become part of the more generic, and hence more verbose, TEI language. There seems to be very little in the proposals sketched here that cannot already be encoded with the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes they believe need to be added, how accurately these can represent the textual phenomena, and how efficiently they will work in software.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6585</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6585"/>
		<updated>2009-05-24T11:48:26Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach, in which each physical document is encoded separately, even when the documents are merely drafts of a single text. As a result there is a great deal of redundant information in their representation, which only increases the work required of editors and software to maintain copies of text that are supposed to be linked or identical. It would be simpler and more efficient for each piece of text that occurs exactly once in a work to be represented by a single, unique piece of text.&lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts have a structure that can be broken down into a hierarchy of changes which can be conveniently grouped and arbitrarily nested. Similarly, in section 4.1 a strict hierarchy of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line is imposed. Since Barnard's 1988 paper, in which he pointed out the inherent failure of markup to adequately represent even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation, which seeks to reassert the OHCO thesis (a thesis withdrawn by its own authors), will fail to adequately represent these complex genetic texts, which are primarily non-hierarchical in structure. Is it really possible any longer, for texts that will be subject to anything from mild to extreme overlap, to propose a standard for the future that essentially ignores the overlap problem? The past twenty years of research on this topic cannot so lightly be set aside.&lt;br /&gt;
&lt;br /&gt;
The proposal also does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup in individual files and as links between documentary versions. Collation programs work by comparing what are essentially plain-text files, containing only light markup for references in COCOA, or empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML makes this proposal very difficult, at best, to achieve. It would be better if a purely digital representation of the text were the objective, since in that case no apparatus would be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism for transposition as described also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism; if it allows chunks of transposed text to be moved around, however, it will fail whenever the chunks contain non-well-formed markup or the destination does not permit that markup at that point in the schema. And if transpositions between physical versions are allowed (these in fact comprise the majority of cases), how can such a mechanism work, especially when the transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed encoding standard is incorporated into TEI, however, this advantage will be lost: the proposed codes will simply become part of the more generic, and hence more verbose, TEI language. There seems to be very little in the proposals sketched here that cannot already be encoded with the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes they believe need to be added, how accurately these can represent the textual phenomena, and how efficiently they will work in software.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6584</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6584"/>
		<updated>2009-05-24T11:47:47Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach, in which each physical document is encoded separately, even when the documents are merely drafts of a single text. As a result there is a great deal of redundant information in their representation, which only increases the work required of editors and software to maintain copies of text that are supposed to be linked or identical. It would be simpler and more efficient for each piece of text that occurs exactly once in a work to be represented by a single, unique piece of text.&lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts have a structure that can be broken down into a hierarchy of changes which can be conveniently grouped and arbitrarily nested. Similarly, in section 4.1 a strict hierarchy of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line is imposed. Since Barnard's 1988 paper, in which he pointed out the inherent failure of markup to adequately represent even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation, which seeks to reassert the OHCO thesis (a thesis withdrawn by its own authors), will fail to adequately represent these complex genetic texts, which are primarily non-hierarchical in structure. Is it really possible any longer, for texts that will be subject to anything from mild to extreme overlap, to propose a standard for the future that essentially ignores the overlap problem? The past twenty years of research on this topic cannot simply be set aside.&lt;br /&gt;
&lt;br /&gt;
The proposal also does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup in individual files and as links between documentary versions. Collation programs work by comparing what are essentially plain-text files, containing only light markup for references in COCOA, or empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML makes this proposal very difficult, at best, to achieve. It would be better if a purely digital representation of the text were the objective, since in that case no apparatus would be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism for transposition as described also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism; if it allows chunks of transposed text to be moved around, however, it will fail whenever the chunks contain non-well-formed markup or the destination does not permit that markup at that point in the schema. And if transpositions between physical versions are allowed (these in fact comprise the majority of cases), how can such a mechanism work, especially when the transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed encoding standard is incorporated into TEI, however, this advantage will be lost: the proposed codes will simply become part of the more generic, and hence more verbose, TEI language. There seems to be very little in the proposals sketched here that cannot already be encoded with the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes they believe need to be added, how accurately these can represent the textual phenomena, and how efficiently they will work in software.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6583</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6583"/>
		<updated>2009-05-24T08:19:13Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach, in which each physical document is encoded separately, even when the documents are merely drafts of a single text. As a result there is a great deal of redundant information in their representation, which only increases the work required of editors and software to maintain copies of text that are supposed to be linked or identical. It would be simpler and more efficient for each piece of text that occurs exactly once in a work to be represented by a single, unique piece of text.&lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts have a structure that can be broken down into a hierarchy of changes which can be conveniently grouped and arbitrarily nested. Similarly, in section 4.1 a strict hierarchy of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line is imposed. Since Barnard's 1988 paper, in which he pointed out the inherent failure of markup to adequately represent even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation, which seeks to reassert the OHCO thesis (a thesis withdrawn by its own authors), will fail to adequately represent these complex genetic texts, which are primarily non-hierarchical in structure. Is it really possible any more, for texts that will be subject to anything from mild to extreme overlap, to propose a standard for the future that essentially ignores the overlap problem? The past twenty years of research cannot simply be ignored.&lt;br /&gt;
&lt;br /&gt;
The proposal also does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup in individual files and as links between documentary versions. Collation programs work by comparing what are essentially plain-text files, containing only light markup for references in COCOA, or empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML makes this proposal very difficult, at best, to achieve. It would be better if a purely digital representation of the text were the objective, since in that case no apparatus would be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism for transposition as described also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism; if it allows chunks of transposed text to be moved around, however, it will fail whenever the chunks contain non-well-formed markup or the destination does not permit that markup at that point in the schema. And if transpositions between physical versions are allowed (these in fact comprise the majority of cases), how can such a mechanism work, especially when the transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed encoding standard is incorporated into TEI, however, this advantage will be lost: the proposed codes will simply become part of the more generic, and hence more verbose, TEI language. There seems to be very little in the proposals sketched here that cannot already be encoded with the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes they believe need to be added, how accurately these can represent the textual phenomena, and how efficiently they will work in software.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6582</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6582"/>
		<updated>2009-05-24T08:18:41Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach, in which each physical document is encoded separately, even when the documents are merely drafts of a single text. As a result there is a great deal of redundant information in their representation, which only increases the work required of editors and software to maintain copies of text that are supposed to be linked or identical. It would be simpler and more efficient for each piece of text that occurs exactly once in a work to be represented by a single, unique piece of text.&lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts have a structure that can be broken down into a hierarchy of changes which can be conveniently grouped and arbitrarily nested. Similarly, in section 4.1 a strict hierarchy of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line is imposed. Since Barnard's 1988 paper, in which he pointed out the inherent failure of markup to adequately represent even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation, which seeks to reassert the OHCO thesis (a thesis withdrawn by its own authors), will fail to adequately represent these complex genetic texts, which are primarily non-hierarchical in structure. Is it really possible any more, for texts that will be subject to anything from mild to extreme overlap, to propose a standard for the future that essentially ignores the overlap problem? The past twenty years of research into this problem cannot simply be ignored.&lt;br /&gt;
&lt;br /&gt;
The proposal also does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup in individual files and as links between documentary versions. Collation programs work by comparing what are essentially plain-text files, containing only light markup for references in COCOA, or empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML makes this proposal very difficult, at best, to achieve. It would be better if a purely digital representation of the text were the objective, since in that case no apparatus would be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism for transposition as described also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism; if it allows chunks of transposed text to be moved around, however, it will fail whenever the chunks contain non-well-formed markup or the destination does not permit that markup at that point in the schema. And if transpositions between physical versions are allowed (these in fact comprise the majority of cases), how can such a mechanism work, especially when the transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed encoding standard is incorporated into TEI, however, this advantage will be lost: the proposed codes will simply become part of the more generic, and hence more verbose, TEI language. There seems to be very little in the proposals sketched here that cannot already be encoded with the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes they believe need to be added, how accurately these can represent the textual phenomena, and how efficiently they will work in software.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6581</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6581"/>
		<updated>2009-05-22T22:04:21Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach, in which each physical document is encoded separately, even when the documents are merely drafts of a single text. As a result there is a great deal of redundant information in their representation, which only increases the work required of editors and software to maintain copies of text that are supposed to be linked or identical. It would be simpler and more efficient for each piece of text that occurs exactly once in a work to be represented by a single, unique piece of text.&lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts have a structure that can be broken down into a hierarchy of changes which can be conveniently grouped and arbitrarily nested. Similarly, in section 4.1 a strict hierarchy of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line is imposed. Since Barnard's 1988 paper, in which he pointed out the inherent failure of markup to adequately represent even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation, which seeks to reassert the OHCO thesis (a thesis withdrawn by its own authors), will fail to adequately represent these complex genetic texts, which are primarily non-hierarchical in structure.&lt;br /&gt;
&lt;br /&gt;
The proposal does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup in individual files and as links between documentary versions. Collation programs work by comparing what are essentially plain-text files, containing only light markup for references in COCOA, or empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML makes this proposal very difficult, at best, to achieve. It would be better if a purely digital representation of the text were the objective, since in that case no apparatus would be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism for transposition as described also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism; if it allows chunks of transposed text to be moved around, however, it will fail whenever the chunks contain non-well-formed markup or the destination does not permit that markup at that point in the schema. And if transpositions between physical versions are allowed (these in fact comprise the majority of cases), how can such a mechanism work, especially when the transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed encoding standard is incorporated into TEI, however, this advantage will be lost: the proposed codes will simply become part of the more generic, and hence more verbose, TEI language. There seems to be very little in the proposals sketched here that cannot already be encoded with the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes they believe need to be added, how accurately these can represent the textual phenomena, and how efficiently they will work in software.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6563</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6563"/>
		<updated>2009-05-22T07:30:26Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach. In this method each physical document is encoded separately, even when they are just drafts of the one text. As a result there is a great deal of redundant information in their representation. This only serves to increase the work of editors and software in maintaining copies of text that are supposed to be linked or identical. It would be much more efficient and simpler to represent each instance of a piece of text that occurs exactly once in a work by a unique piece of text. &lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts have a structure that can be broken down into a hierarchy of changes that can be conveniently grouped and nested arbitrarily. Similarly in section 4.1 a strict hierarchy is imposed consisting of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line. Since Barnard's paper in 1988 where he pointed out the inherent failure of markup to adequately represent a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation, which seeks to reassert the OHCO thesis, which has been withdrawn by its own authors, will fail to adequately represent these complex genetic texts, which are primarily non-hierarchical in structure. &lt;br /&gt;
&lt;br /&gt;
The proposal does not explain how it is intended to 'collate' XML documents arranged in this structure, especially when the variants are distributed via two mechanisms: as markup in individual files and also as links between documentary versions. Collation programs work by comparing basically plain text files, containing only light markup for references in COCOA or empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML renders this proposal at least very difficult to achieve. It would be better if a purely digital representation of the text were the objective, since in this case, an apparatus would not be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism described for transposition also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism; if, however, it allows chunks of transposed text to be moved around, it will fail whenever a chunk contains non-well-formed markup or the destination does not permit that markup in the schema at that point. Moreover, if transpositions between physical versions are allowed, and these in fact comprise the majority of cases, how can such a mechanism work, especially when transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed encoding standard is incorporated into TEI, however, this advantage will be lost: the proposed codes will simply become part of the more generic, and hence more verbose, TEI language. There seems to be very little in the proposals sketched here that cannot already be encoded under the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes, in their view, need to be added, how accurately they will represent the textual phenomena, and how efficiently they will work in software.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6536</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6536"/>
		<updated>2009-05-21T23:28:05Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach, in which each physical document is encoded separately, even when the documents are merely drafts of the same text. As a result there is a great deal of redundant information in their representation, which only increases the work of editors and software in maintaining copies of text that are supposed to be linked or identical. It would be much simpler and more efficient to represent each piece of text that occurs exactly once in a work by a single, unique piece of text.&lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts can be broken down into a hierarchy of changes that may be conveniently grouped and nested arbitrarily. Similarly, in section 4.1 a strict hierarchy is imposed, consisting of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line. Since Barnard's 1988 paper, which pointed out the inherent failure of markup to represent adequately even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation seeks to reassert the OHCO thesis, since withdrawn by its own authors, and will fail to represent adequately these complex genetic texts, which are primarily non-hierarchical in structure.&lt;br /&gt;
&lt;br /&gt;
The proposal does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup within individual files and as links between documentary versions. Collation programs work by comparing essentially plain-text files containing only light markup, whether COCOA references or empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML makes this proposal very difficult to realise, at the least. It would be better if a purely digital representation of the text were the objective, since in that case an apparatus would not be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism described for transposition also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism; if, however, it allows chunks of transposed text to be moved around, it will fail whenever a chunk contains non-well-formed markup or the destination does not permit that markup in the schema at that point. Moreover, if transpositions between physical versions are allowed, and these in fact comprise the majority of cases, how can such a mechanism work, especially when transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed encoding standard is incorporated into TEI, however, this advantage will be lost: the proposed codes will simply become part of the more generic, and hence more verbose, TEI language. There seems to be very little in the proposals sketched here that cannot already be encoded under the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes, in their view, need to be added, how accurately they will represent the textual phenomena, and how efficiently they will work in software.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=Talk:GeneticEditionDraf1Comments&amp;diff=6535</id>
		<title>Talk:GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=Talk:GeneticEditionDraf1Comments&amp;diff=6535"/>
		<updated>2009-05-21T22:49:01Z</updated>

		<summary type="html">&lt;p&gt;Desmond: New page: It would be interesting to get some reactions to these comments.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;It would be interesting to get some reactions to these comments.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6534</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6534"/>
		<updated>2009-05-21T22:42:53Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach, in which each physical document is encoded separately, even when the documents are merely drafts of the same text. As a result there is a great deal of redundant information in their representation, which only increases the work of editors and software in maintaining copies of text that are supposed to be linked or identical. It would be much simpler and more efficient to represent each piece of text that occurs exactly once in a work by a single, unique piece of text.&lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts can be broken down into a hierarchy of changes that may be conveniently grouped and nested arbitrarily. Similarly, in section 4.1 a strict hierarchy is imposed, consisting of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line. Since Barnard's 1988 paper, which pointed out the inherent failure of markup to represent adequately even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation seeks to reassert the OHCO thesis, since withdrawn by its own authors, and will fail to represent adequately these complex genetic texts, which are primarily non-hierarchical in structure. The excessive use of linking between XML elements to represent complex textual phenomena, as described in the proposal and its supporting documentation, can only result in spaghetti-like markup that is difficult to edit, excessively complex, incomputable, and inadequate as a representation of the data.&lt;br /&gt;
&lt;br /&gt;
The proposal does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup within individual files and as links between documentary versions. Collation programs work by comparing essentially plain-text files containing only light markup, whether COCOA references or empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML makes this proposal very difficult to realise, at the least. It would be better if a purely digital representation of the text were the objective, since in that case an apparatus would not be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism described for transposition also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism; if, however, it allows chunks of transposed text to be moved around, it will fail whenever a chunk contains non-well-formed markup or the destination does not permit that markup in the schema at that point. Moreover, if transpositions between physical versions are allowed, and these in fact comprise the majority of cases, how can such a mechanism work, especially when transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed encoding standard is incorporated into TEI, however, this advantage will be lost: the proposed codes will simply become part of the more generic, and hence more verbose, TEI language. There seems to be very little in the proposals sketched here that cannot already be encoded under the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes, in their view, need to be added, how accurately they will represent the textual phenomena, and how efficiently they will work in software.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6533</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6533"/>
		<updated>2009-05-21T22:12:30Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach, in which each physical document is encoded separately, even when the documents are merely drafts of the same text. As a result there is a great deal of redundant information in their representation, which only increases the work of editors and software in maintaining copies of text that are supposed to be linked or identical. It would be much simpler and more efficient to represent each piece of text that occurs exactly once in a work by a single, unique piece of text.&lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts can be broken down into a hierarchy of changes that may be conveniently grouped and nested arbitrarily. Similarly, in section 4.1 a strict hierarchy is imposed, consisting of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line. Since Barnard's 1988 paper, which pointed out the inherent failure of markup to represent adequately even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation seeks to reassert the OHCO thesis, since withdrawn by its own authors, and will fail to represent adequately these complex genetic texts, which are primarily non-hierarchical in structure. The excessive use of linking between XML elements to represent complex textual phenomena, as described in the proposal and its supporting documentation, can only result in spaghetti-like markup that is difficult to edit, excessively complex, incomputable, and inadequate as a representation of the data.&lt;br /&gt;
&lt;br /&gt;
The proposal does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup within individual files and as links between documentary versions. Collation programs work by comparing essentially plain-text files containing only light markup, whether COCOA references or empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML makes this proposal very difficult to realise, at the least. It would be better if a purely digital representation of the text were the objective, since in that case an apparatus would not be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism described for transposition also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism; if, however, it allows chunks of transposed text to be moved around, it will fail whenever a chunk contains non-well-formed markup or the destination does not permit that markup in the schema at that point. Moreover, if transpositions between physical versions are allowed, and these in fact comprise the majority of cases, how can such a mechanism work, especially when transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed encoding standard is incorporated into TEI, however, this advantage will be lost: the proposed codes will simply become part of the more generic, and hence more verbose, TEI language. There seems to be very little in the proposals sketched here that cannot already be encoded under the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes, in their view, need to be added, how accurately they will represent the textual phenomena, and how efficiently they will work in software.&lt;br /&gt;
&lt;br /&gt;
The public discussion of this draft has also been a little underwhelming. Most of those who will be expected to use the encoding guidelines for genetic editions will have had no say in their development. In this Web 2.0 age an open, online discussion forum would, at the least, be normal; instead, a small group of academics discusses the contents behind closed doors. End users may perhaps be excused for ignoring a result that is not subject to true peer review.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6532</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6532"/>
		<updated>2009-05-21T22:12:12Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach, in which each physical document is encoded separately, even when the documents are merely drafts of the same text. As a result there is a great deal of redundant information in their representation, which only increases the work of editors and software in maintaining copies of text that are supposed to be linked or identical. It would be much simpler and more efficient to represent each piece of text that occurs exactly once in a work by a single, unique piece of text.&lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts can be broken down into a hierarchy of changes that may be conveniently grouped and nested arbitrarily. Similarly, in section 4.1 a strict hierarchy is imposed, consisting of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line. Since Barnard's 1988 paper, which pointed out the inherent failure of markup to represent adequately even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation seeks to reassert the OHCO thesis, since withdrawn by its own authors, and will fail to represent adequately these complex genetic texts, which are primarily non-hierarchical in structure. The excessive use of linking between XML elements to represent complex textual phenomena, as described in the proposal and its supporting documentation, can only result in spaghetti-like markup that is difficult to edit, excessively complex, incomputable, and inadequate as a representation of the data.&lt;br /&gt;
&lt;br /&gt;
The proposal does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup within individual files and as links between documentary versions. Collation programs work by comparing essentially plain-text files containing only light markup, whether COCOA references or empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML makes this proposal very difficult to realise, at the least. It would be better if a purely digital representation of the text were the objective, since in that case an apparatus would not be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism described for transposition also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism; if, however, it allows chunks of transposed text to be moved around, it will fail whenever a chunk contains non-well-formed markup or the destination does not permit that markup in the schema at that point. Moreover, if transpositions between physical versions are allowed, and these in fact comprise the majority of cases, how can such a mechanism work, especially when transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed encoding standard is incorporated into TEI, however, this advantage will be lost: the proposed codes will simply become part of the more generic, and hence more verbose, TEI language. There seems to be very little in the proposals sketched here that cannot already be encoded under the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes, in their view, need to be added, how accurately they will represent the textual phenomena, and how efficiently they will work in software.&lt;br /&gt;
&lt;br /&gt;
The public discussion of this draft has also been a little underwhelming. Most of those who will be expected to use the encoding guidelines for genetic editions will have had no say in their development. In this Web 2.0 age an open, online discussion forum would, at the least, be normal; instead, a small group of academics discusses the contents behind closed doors. End users may perhaps be excused for ignoring a result that is not subject to true peer review.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6531</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6531"/>
		<updated>2009-05-21T22:11:47Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach, in which each physical document is encoded separately, even when the documents are merely drafts of the same text. As a result there is a great deal of redundant information in their representation, which only increases the work of editors and software in maintaining copies of text that are supposed to be linked or identical. It would be much simpler and more efficient to represent each piece of text that occurs exactly once in a work by a single, unique piece of text.&lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts can be broken down into a hierarchy of changes that may be conveniently grouped and nested arbitrarily. Similarly, in section 4.1 a strict hierarchy is imposed, consisting of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line. Since Barnard's 1988 paper, which pointed out the inherent failure of markup to represent adequately even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation seeks to reassert the OHCO thesis, since withdrawn by its own authors, and will fail to represent adequately these complex genetic texts, which are primarily non-hierarchical in structure. The excessive use of linking between XML elements to represent complex textual phenomena, as described in the proposal and its supporting documentation, can only result in spaghetti-like markup that is difficult to edit, excessively complex, incomputable, and inadequate as a representation of the data.&lt;br /&gt;
&lt;br /&gt;
The proposal does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup within individual files and as links between documentary versions. Collation programs work by comparing essentially plain-text files containing only light markup, whether COCOA references or empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML makes this proposal very difficult to realise, at the least. It would be better if a purely digital representation of the text were the objective, since in that case an apparatus would not be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism described for transposition also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism; if, however, it allows chunks of transposed text to be moved around, it will fail whenever a chunk contains non-well-formed markup or the destination does not permit that markup in the schema at that point. Moreover, if transpositions between physical versions are allowed, and these in fact comprise the majority of cases, how can such a mechanism work, especially when transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed encoding standard is incorporated into TEI, however, this advantage will be lost: the proposed codes will simply become part of the more generic, and hence more verbose, TEI language. There seems to be very little in the proposals sketched here that cannot already be encoded under the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes, in their view, need to be added, how accurately they will represent the textual phenomena, and how efficiently they will work in software.&lt;br /&gt;
&lt;br /&gt;
The public discussion of this draft has also been a little underwhelming. Most of those who will be expected to use the encoding guidelines for genetic editions will have had no say in their development. In this Web 2.0 age an open, online discussion forum would, at the least, be normal; instead, a small group of academics discusses the contents behind closed doors. End users may perhaps be excused for ignoring a result that is not subject to true peer review.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6530</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6530"/>
		<updated>2009-05-21T20:55:39Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach, in which each physical document is encoded separately, even when the documents are merely drafts of the same text. As a result there is a great deal of redundant information in their representation, which only increases the work of editors and software in maintaining copies of text that are supposed to be linked or identical. It would be much simpler and more efficient to represent each piece of text that occurs exactly once in a work by a single, unique piece of text.&lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts can be broken down into a hierarchy of changes that may be conveniently grouped and nested arbitrarily. Similarly, in section 4.1 a strict hierarchy is imposed, consisting of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line. Since Barnard's 1988 paper, which pointed out the inherent failure of markup to represent adequately even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation seeks to reassert the OHCO thesis, since withdrawn by its own authors, and will fail to represent adequately these complex genetic texts, which are primarily non-hierarchical in structure. The excessive use of linking between XML elements to represent complex textual phenomena, as described in the proposal and its supporting documentation, can only result in spaghetti-like markup that is difficult to edit, excessively complex, incomputable, and inadequate as a representation of the data.&lt;br /&gt;
&lt;br /&gt;
The proposal does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup within individual files and as links between documentary versions. Collation programs work by comparing essentially plain-text files containing only light markup, whether COCOA references or empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML makes this proposal very difficult to realise, at the least. It would be better if a purely digital representation of the text were the objective, since in that case an apparatus would not be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism described for transposition also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism; if, however, it allows chunks of transposed text to be moved around, it will fail whenever a chunk contains non-well-formed markup or the destination does not permit that markup in the schema at that point. Moreover, if transpositions between physical versions are allowed, and these in fact comprise the majority of cases, how can such a mechanism work, especially when transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed encoding standard is incorporated into TEI, however, this advantage will be lost: the proposed codes will simply become part of the more generic, and hence more verbose, TEI language. There seems to be very little in the proposals sketched here that cannot already be encoded under the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes, in their view, need to be added, what functional advantage they expect from the proposed modifications, and how efficiently they will work in software.&lt;br /&gt;
&lt;br /&gt;
The public discussion of this draft has also been a little underwhelming. Most of those who will be expected to use the encoding guidelines for genetic editions will have had no say in their development. In this Web 2.0 age an open, online discussion forum would, at the least, be normal; instead, a small group of academics discusses the contents behind closed doors. End users may perhaps be excused for ignoring a result that is not subject to true peer review.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6529</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6529"/>
		<updated>2009-05-21T20:55:15Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach, in which each physical document is encoded separately, even when the documents are merely drafts of the same text. As a result there is a great deal of redundant information in their representation, which only increases the work of editors and software in maintaining copies of text that are supposed to be linked or identical. It would be much simpler and more efficient to represent each piece of text that occurs exactly once in a work by a single, unique piece of text.&lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts can be broken down into a hierarchy of changes that may be conveniently grouped and nested arbitrarily. Similarly, in section 4.1 a strict hierarchy is imposed, consisting of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line. Since Barnard's 1988 paper, which pointed out the inherent failure of markup to represent adequately even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation seeks to reassert the OHCO thesis, since withdrawn by its own authors, and will fail to represent adequately these complex genetic texts, which are primarily non-hierarchical in structure. The excessive use of linking between XML elements to represent complex textual phenomena, as described in the proposal and its supporting documentation, can only result in spaghetti-like markup that is difficult to edit, excessively complex, incomputable, and inadequate as a representation of the data.&lt;br /&gt;
&lt;br /&gt;
The proposal does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup within individual files and as links between documentary versions. Collation programs work by comparing essentially plain-text files containing only light markup, whether COCOA references or empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML makes this proposal very difficult to realise, at the least. It would be better if a purely digital representation of the text were the objective, since in that case an apparatus would not be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism described for transposition also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism; if, however, it allows chunks of transposed text to be moved around, it will fail whenever a chunk contains non-well-formed markup or the destination does not permit that markup in the schema at that point. Moreover, if transpositions between physical versions are allowed, and these in fact comprise the majority of cases, how can such a mechanism work, especially when transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (=Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed markup encoding standard is incorporated into TEI, however, this advantage will be lost. The proposed codes will just become part of the more generic, and hence more verbose, TEI language. There seems very little in the sketched proposals here that cannot already be encoded in the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes in their view need to be added, and what functional advantage they expect to result from the proposed modifications, and how efficiently they will work in software.&lt;br /&gt;
&lt;br /&gt;
The public discussion of this draft is also a little underwhelming. Most of those who will be expected to use the encoding guidelines for genetic editions will have had no say in its development. In this Web 2.0 age at least an open, online discussion forum would be normal. Instead we have a small group of academics who discuss the contents behind closed doors. End users may perhaps be excused for ignoring a result which is not subject to true peer review.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6528</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6528"/>
		<updated>2009-05-21T20:54:46Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach, in which each physical document is encoded separately, even when the documents are merely drafts of a single text. As a result there is a great deal of redundant information in their representation, which only increases the work of editors and software in maintaining copies of text that are supposed to be linked or identical. It would be much simpler and more efficient to represent each piece of text that occurs exactly once in a work by a unique piece of text.&lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts have a structure that can be broken down into a hierarchy of changes which can be conveniently grouped and nested arbitrarily. Similarly, in section 4.1 a strict hierarchy is imposed, consisting of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line. Since Barnard's 1988 paper, which pointed out the inherent failure of markup to adequately represent even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation, which seeks to reassert the OHCO thesis even though its own authors have withdrawn it, will fail to adequately represent these complex genetic texts, which are primarily non-hierarchical in structure. The excessive use of linking between XML elements to represent complex textual phenomena, as described in the proposal and its supporting documentation, can only result in spaghetti-like markup that is difficult to edit, excessively complex, incomputable, and inadequate as a representation of the data.&lt;br /&gt;
&lt;br /&gt;
The proposal does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup in individual files and as links between documentary versions. Collation programs work by comparing essentially plain-text files containing only light referencing markup, whether COCOA tags or empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML makes this proposal very difficult to achieve, at best. It would be better if a purely digital representation of the text were the objective, since in that case an apparatus would not be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism for transposition as described also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism; if, however, it allows chunks of transposed text to be moved around, it will fail whenever a chunk contains non-well-formed markup or the schema does not permit that markup at the destination. Moreover, if transpositions between physical versions are allowed - and these actually constitute the majority of cases - how can such a mechanism work, especially when transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (=Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed encoding standard is incorporated into TEI, however, this advantage will be lost: the proposed codes will simply become part of the more generic, and hence more verbose, TEI language. There seems to be very little in the proposals sketched here that cannot already be encoded in the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes, in their view, need to be added, what functional advantage they expect the proposed modifications to bring, and how they will work in software.&lt;br /&gt;
&lt;br /&gt;
The public discussion of this draft is also a little underwhelming. Most of those who will be expected to use the encoding guidelines for genetic editions will have had no say in their development. In this Web 2.0 age, an open online discussion forum would at least be normal. Instead we have a small group of academics who discuss the contents behind closed doors. End users may perhaps be excused for ignoring a result that is not subject to true peer review.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6527</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6527"/>
		<updated>2009-05-21T20:51:03Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach, in which each physical document is encoded separately, even when the documents are merely drafts of a single text. As a result there is a great deal of redundant information in their representation, which only increases the work of editors and software in maintaining copies of text that are supposed to be linked or identical. It would be much simpler and more efficient to represent each piece of text that occurs exactly once in a work by a unique piece of text.&lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts have a structure that can be broken down into a hierarchy of changes which can be conveniently grouped and nested arbitrarily. Similarly, in section 4.1 a strict hierarchy is imposed, consisting of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line. Since Barnard's 1988 paper, which pointed out the inherent failure of markup to adequately represent even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation, which seeks to reassert the OHCO thesis even though its own authors have withdrawn it, will fail to adequately represent these complex genetic texts, which are primarily non-hierarchical in structure. The excessive use of linking between XML elements to represent complex textual phenomena, as described in the proposal and its supporting documentation, can only result in spaghetti-like markup that is difficult to edit, excessively complex, incomputable, and inadequate as a representation of the data.&lt;br /&gt;
&lt;br /&gt;
The proposal does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup in individual files and as links between documentary versions. Collation programs work by comparing essentially plain-text files containing only light referencing markup, whether COCOA tags or empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML makes this proposal very difficult to achieve, at best. It would be better if a purely digital representation of the text were the objective, since in that case an apparatus would not be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism for transposition as described also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism; if, however, it allows chunks of transposed text to be moved around, it will fail whenever a chunk contains non-well-formed markup or the schema does not permit that markup at the destination. Moreover, if transpositions between physical versions are allowed - and these actually constitute the majority of cases - how can such a mechanism work, especially when transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (=Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed encoding standard is incorporated into TEI, however, this advantage will be lost: the proposed codes will simply become part of the more generic, and hence more verbose, TEI language. There seems to be very little in the proposals sketched here that cannot already be encoded in the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes, in their view, need to be added, and what functional advantage they expect the proposed modifications to bring.&lt;br /&gt;
&lt;br /&gt;
The public discussion of this draft is also a little underwhelming. Most of those who will be expected to use the encoding guidelines for genetic editions will have had no say in their development. In this Web 2.0 age, an open online discussion forum would at least be normal. Instead we have a small group of academics who discuss the contents behind closed doors. End users may perhaps be excused for ignoring a result that is not subject to true peer review.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6526</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6526"/>
		<updated>2009-05-21T20:50:41Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach, in which each physical document is encoded separately, even when the documents are merely drafts of a single text. As a result there is a great deal of redundant information in their representation, which only increases the work of editors and software in maintaining copies of text that are supposed to be linked or identical. It would be much simpler and more efficient to represent each piece of text that occurs exactly once in a work by a unique piece of text.&lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts have a structure that can be broken down into a hierarchy of changes which can be conveniently grouped and nested arbitrarily. Similarly, in section 4.1 a strict hierarchy is imposed, consisting of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line. Since Barnard's 1988 paper, which pointed out the inherent failure of markup to adequately represent even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation, which seeks to reassert the OHCO thesis even though its own authors have withdrawn it, will fail to adequately represent these complex genetic texts, which are primarily non-hierarchical. The excessive use of linking between XML elements to represent complex textual phenomena, as described in the proposal and its supporting documentation, can only result in spaghetti-like markup that is difficult to edit, excessively complex, incomputable, and inadequate as a representation of the data.&lt;br /&gt;
&lt;br /&gt;
The proposal does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup in individual files and as links between documentary versions. Collation programs work by comparing essentially plain-text files containing only light referencing markup, whether COCOA tags or empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML makes this proposal very difficult to achieve, at best. It would be better if a purely digital representation of the text were the objective, since in that case an apparatus would not be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism for transposition as described also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism; if, however, it allows chunks of transposed text to be moved around, it will fail whenever a chunk contains non-well-formed markup or the schema does not permit that markup at the destination. Moreover, if transpositions between physical versions are allowed - and these actually constitute the majority of cases - how can such a mechanism work, especially when transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (=Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed encoding standard is incorporated into TEI, however, this advantage will be lost: the proposed codes will simply become part of the more generic, and hence more verbose, TEI language. There seems to be very little in the proposals sketched here that cannot already be encoded in the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes, in their view, need to be added, and what functional advantage they expect the proposed modifications to bring.&lt;br /&gt;
&lt;br /&gt;
The public discussion of this draft is also a little underwhelming. Most of those who will be expected to use the encoding guidelines for genetic editions will have had no say in their development. In this Web 2.0 age, an open online discussion forum would at least be normal. Instead we have a small group of academics who discuss the contents behind closed doors. End users may perhaps be excused for ignoring a result that is not subject to true peer review.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6525</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6525"/>
		<updated>2009-05-21T20:43:08Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach, in which each physical document is encoded separately, even when the documents are merely drafts of a single text. As a result there is a great deal of redundant information in their representation, which only increases the work of editors and software in maintaining copies of text that are supposed to be linked or identical. It would be much simpler and more efficient to represent each piece of text that occurs exactly once in a work by a unique piece of text.&lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts have a structure that can be broken down into a hierarchy of changes which can be conveniently grouped and nested arbitrarily. Similarly, in section 4.1 a strict hierarchy is imposed, consisting of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line. Since Barnard's 1988 paper, which pointed out the inherent failure of markup to adequately represent even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation, which seeks to reassert the OHCO thesis even though its own authors have withdrawn it, will fail to adequately represent these complex genetic texts until it is recognised that they are primarily non-hierarchical. The excessive use of linking between XML elements to represent complex textual phenomena, as described in the proposal and its supporting documentation, can only result in spaghetti-like markup that is difficult to edit, excessively complex, incomputable, and inadequate as a representation of the data.&lt;br /&gt;
&lt;br /&gt;
The proposal does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup in individual files and as links between documentary versions. Collation programs work by comparing essentially plain-text files containing only light referencing markup, whether COCOA tags or empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML makes this proposal very difficult to achieve, at best. It would be better if a purely digital representation of the text were the objective, since in that case an apparatus would not be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism for transposition as described also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism; if, however, it allows chunks of transposed text to be moved around, it will fail whenever a chunk contains non-well-formed markup or the schema does not permit that markup at the destination. Moreover, if transpositions between physical versions are allowed - and these actually constitute the majority of cases - how can such a mechanism work, especially when transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (=Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed encoding standard is incorporated into TEI, however, this advantage will be lost: the proposed codes will simply become part of the more generic, and hence more verbose, TEI language. There seems to be very little in the proposals sketched here that cannot already be encoded in the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes, in their view, need to be added, and what functional advantage they expect the proposed modifications to bring.&lt;br /&gt;
&lt;br /&gt;
The public discussion of this draft is also a little underwhelming. Most of those who will be expected to use the encoding guidelines for genetic editions will have had no say in their development. In this Web 2.0 age, an open online discussion forum would at least be normal. Instead we have a small group of academics who discuss the contents behind closed doors. End users may perhaps be excused for ignoring a result that is not subject to true peer review.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6524</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6524"/>
		<updated>2009-05-21T20:42:18Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach, in which each physical document is encoded separately, even when the documents are merely drafts of a single text. As a result there is a great deal of redundant information in their representation, which only increases the work of editors and software in maintaining copies of text that are supposed to be linked or identical. It would be much simpler and more efficient to represent each piece of text that occurs exactly once in a work by a unique piece of text.&lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts have a structure that can be broken down into a hierarchy of changes which can be conveniently grouped and nested arbitrarily. Similarly, in section 4.1 a strict hierarchy is imposed, consisting of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line. Since Barnard's 1988 paper, which pointed out the inherent failure of markup to adequately represent even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation, which seeks to reassert the OHCO thesis even though its own authors have withdrawn it, will fail to adequately represent these complex genetic texts until it is recognised that they are primarily non-hierarchical. The excessive use of linking between XML elements to represent complex textual phenomena, as described in the proposal and its supporting documentation, can only result in spaghetti-like markup that is difficult to edit, excessively complex, uncomputable, and inadequate as a representation of the data.&lt;br /&gt;
&lt;br /&gt;
The proposal does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup in individual files and as links between documentary versions. Collation programs work by comparing essentially plain-text files containing only light referencing markup, whether COCOA tags or empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML makes this proposal very difficult to achieve, at best. It would be better if a purely digital representation of the text were the objective, since in that case an apparatus would not be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism for transposition as described also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism; if, however, it allows chunks of transposed text to be moved around, it will fail whenever a chunk contains non-well-formed markup or the schema does not permit that markup at the destination. Moreover, if transpositions between physical versions are allowed - and these actually constitute the majority of cases - how can such a mechanism work, especially when transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (=Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed encoding standard is incorporated into TEI, however, this advantage will be lost: the proposed codes will simply become part of the more generic, and hence more verbose, TEI language. There seems to be very little in the proposals sketched here that cannot already be encoded in the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes, in their view, need to be added, and what functional advantage they expect the proposed modifications to bring.&lt;br /&gt;
&lt;br /&gt;
The public discussion of this draft is also a little underwhelming. Most of those who will be expected to use the encoding guidelines for genetic editions will have had no say in their development. In this Web 2.0 age, an open online discussion forum would at least be normal. Instead we have a small group of academics who discuss the contents behind closed doors. End users may perhaps be excused for ignoring a result that is not subject to true peer review.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6523</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6523"/>
		<updated>2009-05-21T20:39:53Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach, in which each physical document is encoded separately, even when the documents are merely drafts of a single text. As a result there is a great deal of redundant information in their representation, which only increases the work of editors and software in maintaining copies of text that are supposed to be linked or identical. It would be much simpler and more efficient to represent each piece of text that occurs exactly once in a work by a unique piece of text.&lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts have a structure that can be broken down into a hierarchy of changes which can be conveniently grouped and nested arbitrarily. Similarly, in section 4.1 a strict hierarchy is imposed, consisting of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line. Since Barnard's 1988 paper, which pointed out the inherent failure of markup to adequately represent even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation, which seeks to reassert the OHCO thesis even though its own authors have withdrawn it, will fail to adequately represent these complex genetic texts until it is recognised that they are primarily non-hierarchical. The proposal's excessive use of linking between XML elements to represent complex textual phenomena can only result in spaghetti-like markup that is difficult to edit, excessively complex, uncomputable, and inadequate as a representation of the data.&lt;br /&gt;
&lt;br /&gt;
The proposal does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup in individual files and as links between documentary versions. Collation programs work by comparing essentially plain-text files containing only light referencing markup, whether COCOA tags or empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML makes this proposal very difficult to achieve, at best. It would be better if a purely digital representation of the text were the objective, since in that case an apparatus would not be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism for transposition as described also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism; if, however, it allows chunks of transposed text to be moved around, it will fail whenever a chunk contains non-well-formed markup or the schema does not permit that markup at the destination. Moreover, if transpositions between physical versions are allowed - and these actually constitute the majority of cases - how can such a mechanism work, especially when transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (=Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed encoding standard is incorporated into TEI, however, this advantage will be lost: the proposed codes will simply become part of the more generic, and hence more verbose, TEI language. There seems to be very little in the proposals sketched here that cannot already be encoded in the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes, in their view, need to be added, and what functional advantage they expect the proposed modifications to bring.&lt;br /&gt;
&lt;br /&gt;
The public discussion of this draft is also a little underwhelming. Most of those who will be expected to use the encoding guidelines for genetic editions will have had no say in their development. In this Web 2.0 age, an open online discussion forum would at least be normal. Instead we have a small group of academics who discuss the contents behind closed doors. End users may perhaps be excused for ignoring a result that is not subject to true peer review.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6522</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6522"/>
		<updated>2009-05-21T20:38:59Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach, in which each physical document is encoded separately, even when the documents are merely drafts of a single text. As a result there is a great deal of redundant information in their representation, which only increases the work of editors and software in maintaining copies of text that are supposed to be linked or identical. It would be much simpler and more efficient to represent each piece of text that occurs exactly once in a work by a unique piece of text.&lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts have a structure that can be broken down into a hierarchy of changes which can be conveniently grouped and nested arbitrarily. Similarly, in section 4.1 a strict hierarchy is imposed, consisting of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line. Since Barnard's 1988 paper, which pointed out the inherent failure of markup to adequately represent even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation, which seeks to reassert the OHCO thesis even though its own authors have withdrawn it, will fail to adequately represent these complex genetic texts until it is recognised that they are primarily non-hierarchical. The proposal's excessive use of linking between XML elements to represent complex textual phenomena can only result in spaghetti-like markup that is difficult to edit, excessively complex, uncomputable, and inadequate as a representation of the data.&lt;br /&gt;
&lt;br /&gt;
The proposal does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup in individual files and as links between documentary versions. Collation programs work by comparing essentially plain-text files containing only light referencing markup, whether COCOA tags or empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML makes this proposal very difficult to achieve, at best. It would be better if a purely digital representation of the text were the objective, since in that case an apparatus would not be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism for transposition as described also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism; if, however, it allows chunks of transposed text to be moved around, it will fail whenever a chunk contains non-well-formed markup or the schema does not permit that markup at the destination. Moreover, if transpositions between physical versions are allowed - and these actually constitute the majority of cases - how can such a mechanism work, especially when transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (=Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed encoding standard is incorporated into TEI, however, this advantage will be lost: the proposed codes will simply become part of the more generic, and hence more verbose, TEI language. There seems to be very little in the proposals sketched here that cannot already be encoded in the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes, in their view, need to be added, and what functional advantage they expect the proposed modifications to bring.&lt;br /&gt;
&lt;br /&gt;
The public discussion of this draft is also a little underwhelming. Those who will be expected to use the encoding guidelines for genetic editions will have had no say in their development. In this Web 2.0 age, an open online discussion forum would at least be normal. Instead we have a small group of academics who discuss the contents behind closed doors. End users may perhaps be excused for ignoring a result that is not subject to true peer review.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6521</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6521"/>
		<updated>2009-05-21T20:37:03Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach, in which each physical document is encoded separately, even when the documents are merely drafts of a single text. As a result there is a great deal of redundant information in their representation, which only increases the work of editors and of software in maintaining copies of text that are supposed to be linked or identical. It would be far simpler and more efficient to represent each piece of text that occurs exactly once in a work by a single, unique copy.&lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts have a structure that can be broken down into a hierarchy of changes which can be conveniently grouped and nested arbitrarily. Similarly, in section 4.1 a strict hierarchy is imposed, consisting of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line. Since Barnard's 1988 paper, which pointed out the inherent failure of markup to represent adequately even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation seeks to reassert the OHCO thesis, a thesis since withdrawn by its own authors, and it will fail to represent these complex genetic texts adequately until it is recognised that they are primarily non-hierarchical. The proposal's excessive use of linking between XML elements to represent complex textual phenomena can only result in spaghetti-like markup that is difficult to edit, excessively complex, and inaccurate as a representation of the data.&lt;br /&gt;
&lt;br /&gt;
The proposal does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup in individual files and as links between documentary versions. Collation programs work by comparing what are essentially plain text files, containing only light markup for references, whether in COCOA format or as empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML makes this proposal very difficult to achieve, at best. It would be better if a purely digital representation of the text were the objective, since in that case an apparatus would not be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism for transposition as described also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism. However, if this allows chunks of transposed text to be moved around, this will fail if the chunks contain non-well-formed markup or if the destination location does not permit that markup in the schema at that point. Also if transpositions between physical versions are allowed - and this actually comprises the majority of cases - how can such a mechanism work, especially when transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (=Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed markup encoding standard is incorporated into TEI, however, this advantage will be lost. The proposed codes will just become part of the more generic, and hence more verbose, TEI language. There seems very little in the sketched proposals here that cannot already be encoded in the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes in their view need to be added, and what functional advantage they expect to result from the proposed modifications.&lt;br /&gt;
&lt;br /&gt;
The public discussion of this draft is also a little underwhelming. Those who will be expected to use the encoding guidelines for genetic editions will have had no say in its development. In this Web 2.0 age at least an open, online discussion forum would be normal. Instead we have a small group of academics who discuss the contents behind closed doors. End users may perhaps be excused for ignoring a result which is not subject to true peer review.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6520</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6520"/>
		<updated>2009-05-21T20:33:30Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach, in which each physical document is encoded separately, even when the documents are merely drafts of a single text. As a result there is a great deal of redundant information in their representation, which only increases the work of editors and of software in maintaining copies of text that are supposed to be linked or identical. It would be far simpler and more efficient to represent each piece of text that occurs exactly once in a work by a single, unique copy.&lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts have a structure that can be broken down into a hierarchy of changes which can be conveniently grouped and nested arbitrarily. Similarly, in section 4.1 a strict hierarchy is imposed, consisting of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line. Since Barnard's 1988 paper, which pointed out the inherent failure of markup to represent adequately even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation seeks to reassert the OHCO thesis, a thesis since withdrawn by its own authors, and it will fail to represent these complex genetic texts adequately until it is recognised that they are primarily non-hierarchical.&lt;br /&gt;
&lt;br /&gt;
The proposal does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup in individual files and as links between documentary versions. Collation programs work by comparing what are essentially plain text files, containing only light markup for references, whether in COCOA format or as empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML makes this proposal very difficult to achieve, at best. It would be better if a purely digital representation of the text were the objective, since in that case an apparatus would not be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism for transposition as described also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism. However, if this allows chunks of transposed text to be moved around, this will fail if the chunks contain non-well-formed markup or if the destination location does not permit that markup in the schema at that point. Also if transpositions between physical versions are allowed - and this actually comprises the majority of cases - how can such a mechanism work, especially when transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (=Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed markup encoding standard is incorporated into TEI, however, this advantage will be lost. The proposed codes will just become part of the more generic, and hence more verbose, TEI language. There seems very little in the sketched proposals here that cannot already be encoded in the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes in their view need to be added, and what functional advantage they expect to result from the proposed modifications.&lt;br /&gt;
&lt;br /&gt;
The public discussion of this draft is also a little underwhelming. Those who will be expected to use the encoding guidelines for genetic editions will have had no say in its development. In this Web 2.0 age at least an open, online discussion forum would be normal. Instead we have a small group of academics who discuss the contents behind closed doors. End users may perhaps be excused for ignoring a result which is not subject to true peer review.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6519</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6519"/>
		<updated>2009-05-21T20:32:13Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach, in which each physical document is encoded separately, even when the documents are merely drafts of a single text. As a result there is a great deal of redundant information in their representation, which only increases the work of editors and of software in maintaining copies of text that are supposed to be linked or identical. It would be far simpler and more efficient to represent each piece of text that occurs exactly once in a work by a single, unique copy.&lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts have a structure that can be broken down into a hierarchy of changes which can be conveniently grouped and nested arbitrarily. Similarly, in section 4.1 a strict hierarchy is imposed, consisting of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line. Since Barnard's 1988 paper, which pointed out the inherent failure of markup to represent adequately even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation seeks to reassert the OHCO thesis, a thesis since withdrawn by its own authors, and it will fail to represent these complex genetic texts adequately until it is recognised that they are primarily non-hierarchical.&lt;br /&gt;
&lt;br /&gt;
The proposal does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup in individual files and as links between documentary versions. Collation programs work by comparing what are essentially plain text files, containing only light markup for references, whether in COCOA format or as empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML makes this proposal very difficult to achieve, at best. It would be better if a purely digital representation of the text were the objective, since in that case an apparatus would not be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism for transposition as described also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism. However, if this allows chunks of transposed text to be moved around, this will fail if the chunks contain non-well-formed markup or if the destination location does not permit that markup in the schema at that point. Also if transpositions between physical versions are allowed - and this actually comprises the majority of cases - how can such a mechanism work, especially when transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (=Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed markup encoding standard is incorporated into TEI, however, this advantage will be lost. The proposed codes will just become part of the more generic, and hence more verbose, TEI language. There seems very little in the sketched proposals here that cannot already be encoded in the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes in their view need to be added, and what functional advantage they expect to result from the proposed modifications.&lt;br /&gt;
&lt;br /&gt;
The public discussion of this draft is also a little underwhelming. Those who will be expected to use the encoding guidelines for genetic editions will have had no say in its development. In this Web 2.0 age at least an open and online discussion forum would be normal. Instead we have a small group of academics who discuss the contents behind closed doors. End users may perhaps be excused for ignoring a result which is not subject to true peer review.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6518</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6518"/>
		<updated>2009-05-21T20:31:15Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach, in which each physical document is encoded separately, even when the documents are merely drafts of a single text. As a result there is a great deal of redundant information in their representation, which only increases the work of editors and of software in maintaining copies of text that are supposed to be linked or identical. It would be far simpler and more efficient to represent each piece of text that occurs exactly once in a work by a single, unique copy.&lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts have a structure that can be broken down into a hierarchy of changes which can be conveniently grouped and nested arbitrarily. Similarly, in section 4.1 a strict hierarchy is imposed, consisting of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line. Since Barnard's 1988 paper, which pointed out the inherent failure of markup to represent adequately even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation seeks to reassert the OHCO thesis, a thesis since withdrawn by its own authors, and it will fail to represent these complex genetic texts adequately until it is recognised that they are primarily non-hierarchical.&lt;br /&gt;
&lt;br /&gt;
The proposal does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup in individual files and as links between documentary versions. Collation programs work by comparing what are essentially plain text files, containing only light markup for references, whether in COCOA format or as empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML makes this proposal very difficult to achieve, at best. It would be better if a purely digital representation of the text were the objective, since in that case an apparatus would not be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism for transposition as described also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism. However, if this allows chunks of transposed text to be moved around, this will fail if the chunks contain non-well-formed markup or if the destination location does not permit that markup in the schema at that point. Also if transpositions between physical versions are allowed - and this actually comprises the majority of cases - how can such a mechanism work, especially when transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (=Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed markup encoding standard is incorporated into TEI, however, this advantage will be lost. The proposed codes will just become part of the more generic, and hence more verbose, TEI language. There seems very little in the sketched proposals here that cannot already be encoded in the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes in their view need to be added, and what functional advantage they expect to result from the proposed modifications.&lt;br /&gt;
&lt;br /&gt;
The public discussion of this draft is also a little underwhelming. Those who will be expected to use the encoding guidelines for genetic editions will have had no say in its development. In this Web 2.0 age at least an open discussion forum would be normal. Instead we have a small group of academics who discuss the contents behind closed doors. End users may perhaps be excused for ignoring a result which is not subject to true peer review.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6517</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6517"/>
		<updated>2009-05-21T20:30:46Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach, in which each physical document is encoded separately, even when the documents are merely drafts of a single text. As a result there is a great deal of redundant information in their representation, which only increases the work of editors and of software in maintaining copies of text that are supposed to be linked or identical. It would be far simpler and more efficient to represent each piece of text that occurs exactly once in a work by a single, unique copy.&lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts have a structure that can be broken down into a hierarchy of changes which can be conveniently grouped and nested arbitrarily. Similarly, in section 4.1 a strict hierarchy is imposed, consisting of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line. Since Barnard's 1988 paper, which pointed out the inherent failure of markup to represent adequately even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation seeks to reassert the OHCO thesis, a thesis since withdrawn by its own authors, and it will fail to represent these complex genetic texts adequately until it is recognised that they are primarily non-hierarchical.&lt;br /&gt;
&lt;br /&gt;
The proposal does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup in individual files and as links between documentary versions. Collation programs work by comparing what are essentially plain text files, containing only light markup for references, whether in COCOA format or as empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML makes this proposal very difficult to achieve, at best. It would be better if a purely digital representation of the text were the objective, since in that case an apparatus would not be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism for transposition as described also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism. However, if this allows chunks of transposed text to be moved around, this will fail if the chunks contain non-well-formed markup or if the destination location does not permit that markup in the schema at that point. Also if transpositions between physical versions are allowed - and this actually comprises the majority of cases - how can such a mechanism work, especially when transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (=Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed markup encoding standard is incorporated into TEI, however, this advantage will be lost. The proposed codes will just become part of the more generic, and hence more verbose, TEI language. There seems very little in the sketched proposals here that cannot already be encoded in the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes in their view need to be added, and what functional advantage they expect to result from the proposed modifications.&lt;br /&gt;
&lt;br /&gt;
The public discussion of this draft is also a little underwhelming. Those who will be expected to use the encoding guidelines for genetic editions will have had no say in its development. In this Web 2.0 age at least an open discussion forum would be normal. Instead we have a small group of academics who discuss the contents behind closed doors. End users may perhaps be excused for ignoring the result which is not subject to true peer review.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6516</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6516"/>
		<updated>2009-05-21T20:28:23Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach, in which each physical document is encoded separately, even when the documents are merely drafts of a single text. As a result there is a great deal of redundant information in their representation, which only increases the work of editors and of software in maintaining copies of text that are supposed to be linked or identical. It would be far simpler and more efficient to represent each piece of text that occurs exactly once in a work by a single, unique copy.&lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts have a structure that can be broken down into a hierarchy of changes which can be conveniently grouped and nested arbitrarily. Similarly, in section 4.1 a strict hierarchy is imposed, consisting of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line. Since Barnard's 1988 paper, which pointed out the inherent failure of markup to represent adequately even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation seeks to reassert the OHCO thesis, a thesis since withdrawn by its own authors, and it will fail to represent these complex genetic texts adequately until it is recognised that they are primarily non-hierarchical.&lt;br /&gt;
&lt;br /&gt;
The proposal does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup in individual files and as links between documentary versions. Collation programs work by comparing what are essentially plain text files, containing only light markup for references, whether in COCOA format or as empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML makes this proposal very difficult to achieve, at best. It would be better if a purely digital representation of the text were the objective, since in that case an apparatus would not be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism for transposition as described also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism. However, if this allows chunks of transposed text to be moved around, this will fail if the chunks contain non-well-formed markup or if the destination location does not permit that markup in the schema at that point. Also if transpositions between physical versions are allowed - and this actually comprises the majority of cases - how can such a mechanism work, especially when transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (=Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed markup encoding standard is incorporated into TEI, however, this advantage will be lost. The proposed codes will just become part of the more generic, and hence more verbose, TEI language. There seems very little in the sketched proposals here that cannot already be encoded in the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes in their view need to be added, and what functional advantage they expect to result from the proposed modifications.&lt;br /&gt;
&lt;br /&gt;
The public discussion of this draft is also a little underwhelming. Those who will be expected to use the encoding guidelines for genetic editions will have had no say in its development. In this Web 2.0 age at least an open discussion forum would be normal. Otherwise the end users may perhaps be excused for ignoring it.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6511</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6511"/>
		<updated>2009-05-20T21:25:58Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach, in which each physical document is encoded separately, even when the documents are merely drafts of a single text. As a result there is a great deal of redundant information in their representation, which only increases the work of editors and of software in maintaining copies of text that are supposed to be linked or identical. It would be far simpler and more efficient to represent each piece of text that occurs exactly once in a work by a single, unique copy.&lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts have a structure that can be broken down into a hierarchy of changes which can be conveniently grouped and nested arbitrarily. Similarly, in section 4.1 a strict hierarchy is imposed, consisting of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line. Since Barnard's 1988 paper, which pointed out the inherent failure of markup to represent adequately even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation seeks to reassert the OHCO thesis, a thesis since withdrawn by its own authors, and it will fail to represent these complex genetic texts adequately until it is recognised that they are primarily non-hierarchical.&lt;br /&gt;
&lt;br /&gt;
The proposal does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup in individual files and as links between documentary versions. Collation programs work by comparing what are essentially plain text files, containing only light markup for references, whether in COCOA format or as empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML makes this proposal very difficult to achieve, at best. It would be better if a purely digital representation of the text were the objective, since in that case an apparatus would not be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism for transposition as described also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism. However, if this allows chunks of transposed text to be moved around, this will fail if the chunks contain non-well-formed markup or if the destination location does not permit that markup in the schema at that point. Also if transpositions between physical versions are allowed - and this actually comprises the majority of cases - how can such a mechanism work, especially when transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (=Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed markup encoding standard is incorporated into TEI, however, this advantage will be lost. The proposed codes will just become part of the more generic, and hence more verbose, TEI language. There seems very little in the sketched proposals here that cannot already be encoded in the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes in their view need to be added, and what functional advantage they expect to result from the proposed modifications.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6510</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6510"/>
		<updated>2009-05-20T21:25:33Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach. In this method each physical document is encoded separately, even when they are just drafts of the one text. As a result there is a great deal of redundant information in their representation. This only serves to increase the work of editors and software in maintaining copies of text that are supposed to be linked or identical. It would be much more efficient and simpler to represent each instance of a piece of text that occurs exactly once in a work by a unique piece of text. &lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts have a structure that can be broken down into a hierarchy of changes, conveniently grouped and nested arbitrarily. Similarly, section 4.1 imposes a strict hierarchy of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line. Since Barnard's 1988 paper, which pointed out the inherent failure of markup to represent adequately even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation seeks to reassert the OHCO thesis, which has since been withdrawn by its own authors, and it will fail to represent these complex genetic texts adequately until it is recognised that they are primarily non-hierarchical.&lt;br /&gt;
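&lt;br /&gt;
A minimal sketch of the overlap problem Barnard identified (an illustrative encoding, not taken from the draft): a single verse line shared between two speakers forces the line and speech hierarchies to cross, which well-formed XML cannot express:&lt;br /&gt;
&lt;pre&gt;
&amp;lt;sp who="Marcellus"&amp;gt;&amp;lt;l&amp;gt;Peace, break thee off;&amp;lt;/sp&amp;gt;
&amp;lt;sp who="Barnardo"&amp;gt;look where it comes again!&amp;lt;/l&amp;gt;&amp;lt;/sp&amp;gt;
&lt;/pre&gt;
Whichever element is closed first, either the line or the speech is fragmented.&lt;br /&gt;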
&lt;br /&gt;
The proposal does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup within individual files and as links between documentary versions. Collation programs work by comparing essentially plain-text files, containing only light markup for references, either COCOA tags or empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML renders this proposal difficult, if not impossible, to achieve. It would be better if a purely digital representation of the text were the objective, since in that case no apparatus would be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism for transposition as described also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism; however, if it allows chunks of transposed text to be moved around, it will fail whenever a chunk contains non-well-formed markup, or whenever the destination does not permit that markup in the schema at that point. And if transpositions between physical versions are allowed - and these actually constitute the majority of cases - how can such a mechanism work ''between'' documents, especially when transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (= Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed encoding standard were incorporated into TEI, however, this advantage would be lost: the proposed codes would simply become part of the more generic, and hence more verbose, TEI language. There seems to be very little in the proposals sketched here that cannot already be encoded in the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes they believe need to be added, and what functional advantage they expect the proposed modifications to deliver.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6505</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6505"/>
		<updated>2009-05-19T20:51:54Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach. In this method each physical document is encoded separately, even when they are just drafts of the one text. As a result there is a great deal of redundant information in their representation. This only serves to increase the work of editors and software in maintaining copies of text that are supposed to be linked or identical. It would be much more efficient and simpler to represent each instance of a piece of text that occurs exactly once in a work by a unique piece of text. &lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts have a structure that can be broken down into a hierarchy of changes, conveniently grouped and nested arbitrarily. Similarly, section 4.1 imposes a strict hierarchy of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line. Since Barnard's 1988 paper, which pointed out the inherent failure of markup to represent adequately even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation seeks to reassert the OHCO thesis, which has since been withdrawn by its own authors, and it will fail to represent these complex genetic texts adequately until it is recognised that they are primarily non-hierarchical.&lt;br /&gt;
&lt;br /&gt;
The proposal does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup within individual files and as links between documentary versions. Collation programs work by comparing essentially plain-text files, containing only light markup for references, either COCOA tags or empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML renders this proposal difficult, if not impossible, to achieve. It would be better if a purely digital representation of the text were the objective, since in that case no apparatus would be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism for transposition as described also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism; however, if it allows chunks of transposed text to be moved around, it will fail whenever a chunk contains non-well-formed markup, or whenever the destination does not permit that markup in the schema at that point. And if transpositions between physical versions are allowed - and these actually constitute the majority of cases - how is such a mechanism to work, especially when transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (= Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed encoding standard were incorporated into TEI, however, this advantage would be lost: the proposed codes would simply become part of the more generic, and hence more verbose, TEI language. There seems to be very little in the proposals sketched here that cannot already be encoded in the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes they believe need to be added, and what functional advantage they expect the proposed modifications to deliver.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6504</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6504"/>
		<updated>2009-05-19T20:49:13Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach. In this method each physical document is encoded separately, even when they are just drafts of the one text. As a result there is a great deal of redundant information in their representation. This only serves to increase the work of editors and software in maintaining copies of text that are supposed to be linked or identical. It would be much more efficient and simpler to represent each instance of a piece of text that occurs exactly once in a work by a unique piece of text. &lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts have a structure that can be broken down into a hierarchy of changes, conveniently grouped and nested arbitrarily. Similarly, section 4.1 imposes a strict hierarchy of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line. Since Barnard's 1988 paper, which pointed out the inherent failure of markup to represent adequately even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation seeks to reassert the OHCO thesis, which has since been withdrawn by its own authors, and it will fail to represent these complex genetic texts adequately until it is recognised that they are primarily non-hierarchical. Unfortunately, it is no longer possible to ignore the overlap problem, as this proposal does.&lt;br /&gt;
&lt;br /&gt;
The proposal does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup within individual files and as links between documentary versions. Collation programs work by comparing essentially plain-text files, containing only light markup for references, either COCOA tags or empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML renders this proposal difficult, if not impossible, to achieve. It would be better if a purely digital representation of the text were the objective, since in that case no apparatus would be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism for transposition as described also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism; however, if it allows chunks of transposed text to be moved around, it will fail whenever a chunk contains non-well-formed markup, or whenever the destination does not permit that markup in the schema at that point. And if transpositions between physical versions are allowed - and these actually constitute the majority of cases - how is such a mechanism to work, especially when transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (= Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed encoding standard were incorporated into TEI, however, this advantage would be lost: the proposed codes would simply become part of the more generic, and hence more verbose, TEI language. There seems to be very little in the proposals sketched here that cannot already be encoded in the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes they believe need to be added, and what functional advantage they expect the proposed modifications to deliver.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6503</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6503"/>
		<updated>2009-05-19T20:48:34Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach. In this method each physical document is encoded separately, even when they are just drafts of the one text. As a result there is a great deal of redundant information in their representation. This only serves to increase the work of editors and software in maintaining copies of text that are supposed to be linked or identical. It would be much more efficient and simpler to represent each instance of a piece of text that occurs exactly once in a work by a unique piece of text. &lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts have a structure that can be broken down into a hierarchy of changes, conveniently grouped and nested arbitrarily. Similarly, section 4.1 imposes a strict hierarchy of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line. Since Barnard's 1988 paper, which pointed out the inherent failure of markup to represent adequately even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation seeks to reassert the OHCO thesis, which has since been withdrawn by its own authors, and it will fail to represent these complex genetic texts adequately until it is recognised that they are primarily non-hierarchical. It is no longer possible to ignore the overlap problem, as the authors of this proposal do.&lt;br /&gt;
&lt;br /&gt;
The proposal does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup within individual files and as links between documentary versions. Collation programs work by comparing essentially plain-text files, containing only light markup for references, either COCOA tags or empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML renders this proposal difficult, if not impossible, to achieve. It would be better if a purely digital representation of the text were the objective, since in that case no apparatus would be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism for transposition as described also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism; however, if it allows chunks of transposed text to be moved around, it will fail whenever a chunk contains non-well-formed markup, or whenever the destination does not permit that markup in the schema at that point. And if transpositions between physical versions are allowed - and these actually constitute the majority of cases - how is such a mechanism to work, especially when transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (= Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed encoding standard were incorporated into TEI, however, this advantage would be lost: the proposed codes would simply become part of the more generic, and hence more verbose, TEI language. There seems to be very little in the proposals sketched here that cannot already be encoded in the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes they believe need to be added, and what functional advantage they expect the proposed modifications to deliver.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6502</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6502"/>
		<updated>2009-05-19T20:48:05Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach. In this method each physical document is encoded separately, even when they are just drafts of the one text. As a result there is a great deal of redundant information in their representation. This only serves to increase the work of editors and software in maintaining copies of text that are supposed to be linked or identical. It would be much more efficient and simpler to represent each instance of a piece of text that occurs exactly once in a work by a unique piece of text. &lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts have a structure that can be broken down into a hierarchy of changes, conveniently grouped and nested arbitrarily. Similarly, section 4.1 imposes a strict hierarchy of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line. Since Barnard's 1988 paper, which pointed out the inherent failure of markup to represent adequately even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation seeks to reassert the OHCO thesis, which has since been withdrawn by its own authors, and it will fail to represent these complex genetic texts adequately until it is recognised that they are primarily non-hierarchical. It is no longer possible to ignore the overlap problem.&lt;br /&gt;
&lt;br /&gt;
The proposal does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup within individual files and as links between documentary versions. Collation programs work by comparing essentially plain-text files, containing only light markup for references, either COCOA tags or empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML renders this proposal difficult, if not impossible, to achieve. It would be better if a purely digital representation of the text were the objective, since in that case no apparatus would be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism for transposition as described also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism; however, if it allows chunks of transposed text to be moved around, it will fail whenever a chunk contains non-well-formed markup, or whenever the destination does not permit that markup in the schema at that point. And if transpositions between physical versions are allowed - and these actually constitute the majority of cases - how is such a mechanism to work, especially when transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (= Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed encoding standard were incorporated into TEI, however, this advantage would be lost: the proposed codes would simply become part of the more generic, and hence more verbose, TEI language. There seems to be very little in the proposals sketched here that cannot already be encoded in the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes they believe need to be added, and what functional advantage they expect the proposed modifications to deliver.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6501</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6501"/>
		<updated>2009-05-19T07:20:49Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach. In this method each physical document is encoded separately, even when they are just drafts of the one text. As a result there is a great deal of redundant information in their representation. This only serves to increase the work of editors and software in maintaining copies of text that are supposed to be linked or identical. It would be much more efficient and simpler to represent each instance of a piece of text that occurs exactly once in a work by a unique piece of text. &lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts have a structure that can be broken down into a hierarchy of changes, conveniently grouped and nested arbitrarily. Similarly, section 4.1 imposes a strict hierarchy of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line. Since Barnard's 1988 paper, which pointed out the inherent failure of markup to represent adequately even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation seeks to reassert the OHCO thesis, which has since been withdrawn by its own authors, and it will fail to represent these complex genetic texts adequately until it is recognised that they are primarily non-hierarchical.&lt;br /&gt;
&lt;br /&gt;
The proposal does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup within individual files and as links between documentary versions. Collation programs work by comparing essentially plain-text files, containing only light markup for references, either COCOA tags or empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML renders this proposal difficult, if not impossible, to achieve. It would be better if a purely digital representation of the text were the objective, since in that case no apparatus would be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism for transposition as described also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism; however, if it allows chunks of transposed text to be moved around, it will fail whenever a chunk contains non-well-formed markup, or whenever the destination does not permit that markup in the schema at that point. And if transpositions between physical versions are allowed - and these actually constitute the majority of cases - how is such a mechanism to work, especially when transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (= Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed encoding standard were incorporated into TEI, however, this advantage would be lost: the proposed codes would simply become part of the more generic, and hence more verbose, TEI language. There seems to be very little in the proposals sketched here that cannot already be encoded in the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes they believe need to be added, and what functional advantage they expect the proposed modifications to deliver.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6500</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6500"/>
		<updated>2009-05-19T07:19:25Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach. In this method each physical document is encoded separately, even when they are just drafts of the one text. As a result there is a great deal of redundant information in their representation. This only serves to increase the work of editors and software in maintaining copies of text that are supposed to be linked or identical. It would be much more efficient and simpler to represent each instance of a piece of text that occurs exactly once in a work by a unique piece of text. &lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts have a structure that can be broken down into a hierarchy of changes, conveniently grouped and nested arbitrarily. Similarly, section 4.1 imposes a strict hierarchy of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line. Since Barnard's 1988 paper, which pointed out the inherent failure of markup to represent adequately even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation seeks to reassert the OHCO thesis, which has since been withdrawn by its own authors, and it will fail to represent these genetic texts adequately until it is recognised that they are primarily non-hierarchical.&lt;br /&gt;
&lt;br /&gt;
The proposal does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup within individual files and as links between documentary versions. Collation programs work by comparing essentially plain-text files, containing only light markup for references, either COCOA tags or empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML renders this proposal difficult, if not impossible, to achieve. It would be better if a purely digital representation of the text were the objective, since in that case no apparatus would be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism for transposition as described also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism; however, if it allows chunks of transposed text to be moved around, it will fail whenever a chunk contains non-well-formed markup, or whenever the destination does not permit that markup in the schema at that point. And if transpositions between physical versions are allowed - and these actually constitute the majority of cases - how is such a mechanism to work, especially when transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (= Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed encoding standard were incorporated into TEI, however, this advantage would be lost: the proposed codes would simply become part of the more generic, and hence more verbose, TEI language. There seems to be very little in the proposals sketched here that cannot already be encoded in the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes they believe need to be added, and what functional advantage they expect the proposed modifications to deliver.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6499</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6499"/>
		<updated>2009-05-19T07:18:55Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach. In this method each physical document is encoded separately, even when they are just drafts of the one text. As a result there is a great deal of redundant information in their representation. This only serves to increase the work of editors and software in maintaining copies of text that are supposed to be linked or identical. It would be much more efficient and simpler to represent each instance of a piece of text that occurs exactly once in a work by a unique piece of text. &lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts have a structure that can be broken down into a hierarchy of changes, conveniently grouped and nested arbitrarily. Similarly, section 4.1 imposes a strict hierarchy of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line. Since Barnard's 1988 paper, which pointed out the inherent failure of markup to represent adequately even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation seeks to reassert the OHCO thesis, which has since been withdrawn by its own authors, and it will fail to represent these genetic texts adequately until it is recognised that they are fundamentally, and primarily, non-hierarchical.&lt;br /&gt;
&lt;br /&gt;
The proposal does not explain how XML documents arranged in this structure are to be 'collated', especially when the variants are distributed via two mechanisms: as markup within individual files and as links between documentary versions. Collation programs work by comparing essentially plain-text files, containing only light markup for references, either COCOA tags or empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML renders this proposal difficult, if not impossible, to achieve. It would be better if a purely digital representation of the text were the objective, since in that case no apparatus would be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism for transposition as described also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism; however, if it allows chunks of transposed text to be moved around, it will fail whenever a chunk contains non-well-formed markup, or whenever the destination does not permit that markup in the schema at that point. And if transpositions between physical versions are allowed - and these actually constitute the majority of cases - how is such a mechanism to work, especially when transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (= Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed encoding standard were incorporated into TEI, however, this advantage would be lost: the proposed codes would simply become part of the more generic, and hence more verbose, TEI language. There seems to be very little in the proposals sketched here that cannot already be encoded in the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes they believe need to be added, and what functional advantage they expect the proposed modifications to deliver.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6490</id>
		<title>GeneticEditionDraf1Comments</title>
		<link rel="alternate" type="text/html" href="https://wiki.tei-c.org/index.php?title=GeneticEditionDraf1Comments&amp;diff=6490"/>
		<updated>2009-05-18T22:13:07Z</updated>

		<summary type="html">&lt;p&gt;Desmond: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Because it is difficult to record many versions in one file using markup, the proposal recommends a document-centric approach, in which each physical document is encoded separately, even when the documents are merely drafts of a single text. As a result, the representation contains a great deal of redundant information, which only increases the work of editors and software in maintaining copies of text that are supposed to be linked or identical. It would be simpler and more efficient to represent each piece of text that occurs in the work by a single, unique copy. The document-centric approach is thus a naive copy of the source documents, which fails to take full advantage of the digital medium. &lt;br /&gt;
&lt;br /&gt;
The section on 'grouping changes' assumes that manuscript texts can be broken down into a hierarchy of changes, conveniently grouped and arbitrarily nested. Similarly, section 4.1 imposes a strict hierarchy of document-&amp;gt;writing surface-&amp;gt;zone-&amp;gt;line. Since Barnard's 1988 paper, which pointed out the inherent inability of hierarchical markup to represent adequately even a trivial case of nested speeches and lines in Shakespeare, the problem of overlap has become the dominant issue in the digital encoding of historical texts. This representation seeks to reassert the OHCO thesis, which has since been withdrawn by its own authors; it will fail to represent these genetic texts adequately until it is recognised that they are fundamentally non-hierarchical. &lt;br /&gt;
&lt;br /&gt;
The proposal does not explain how XML documents arranged in this structure could be 'collated', especially when the variants are distributed via two mechanisms: as markup within individual files and as links between documentary versions. Collation programs work by comparing essentially plain-text files, containing at most light markup for references in COCOA, or empty XML elements (as in the case of Juxta). The virtual absence of collation programs able to process arbitrary XML makes this proposal very difficult to achieve at best. It would be better if a purely digital representation of the text were the objective, since in that case no apparatus would be needed.&lt;br /&gt;
&lt;br /&gt;
The mechanism for transposition as described also sounds infeasible. It is unclear what is meant by the proposed standoff mechanism; but if it allows chunks of transposed text to be moved around, it will fail whenever a chunk contains non-well-formed markup, or the destination does not permit that markup under the schema at that point. And if transpositions between physical versions are allowed (which in fact constitute the majority of cases), how is such a mechanism to work, especially when transposed chunks may well overlap?&lt;br /&gt;
&lt;br /&gt;
The main advantage claimed for HNML and LEG/GML (Genetic Markup Language) is that they are more succinct than a TEI encoding. If the proposed markup standard is incorporated into TEI, however, this advantage will be lost: the proposed codes will simply become part of the more generic, and hence more verbose, TEI language. There seems to be very little in the proposals sketched here that cannot already be encoded with the TEI Guidelines as they currently stand. The authors should spell out clearly which elements and attributes they believe need to be added, and what functional advantage they expect the proposed modifications to deliver.&lt;/div&gt;</summary>
		<author><name>Desmond</name></author>
		
	</entry>
</feed>