fileformats:json

  
**MimeType:** text/x-json

**Default File Suffix:** .json

**Comments**

Reading of JSON files is not yet implemented.
  
==== Output of processed motion data ====
The output of motion data exported to the JSON format differs from the data saved after processing:
  
Each line is a valid JSON entry with no separator at the end. This is similar to other readers and follows the [[http://hadoop.apache.org|Hadoop]] convention. [[http://hadoop.apache.org|Hadoop]] and [[http://spark.apache.org/|Spark]] use this format to make sure splits work properly in a cluster environment. For those new to the [[http://hadoop.apache.org|Hadoop]] file format convention, the reason is that a large file can be split into chunks and sent to different nodes in a cluster. If a record spanned multiple lines, a split might not contain the complete record, which would result in runtime and calculation errors. Where and how a job splits a file varies depending on the job configuration and cluster size.

=== Example ===

<code>
{"T":3.2416666666666667,"ddHeight":14.934731608645043}
{"T":3.25,"ddHeight":15.36967956700511}
{"T":3.2583333333333333,"ddHeight":18.539007468953184}
{"T":3.2666666666666666,"ddHeight":25.623373268847278}
{"T":3.275,"ddHeight":35.752522626800086}
{"T":3.283333333333333,"ddHeight":46.12983110537536}
{"T":3.2916666666666665,"ddHeight":52.955837424588786}
{"T":3.3,"ddHeight":52.67600757207088}
{"T":3.308333333333333,"ddHeight":43.10764808456153}
{"T":3.3166666666666664,"ddHeight":24.31045834393331}
{"T":3.3249999999999997,"ddHeight":-0.8832559527070716}
{"T":3.3333333333333335,"ddHeight":-27.13078708811163}
{"T":3.341666666666667,"ddHeight":-48.096471956424814}
{"T":3.35,"ddHeight":-58.79646541630221}
{"T":3.3583333333333334,"ddHeight":-57.41101652232201}
{"T":3.3666666666666667,"ddHeight":-45.426316458127665}
{"T":3.375,"ddHeight":-26.23458332211291}
{"T":3.3833333333333333,"ddHeight":-3.462568987803557}
</code>
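Because every line is a self-contained JSON object, a consumer can parse the export line by line, exactly as Hadoop and Spark split it. A minimal Python sketch of this (the inline sample data mirrors the example above; reading this format back is not part of the software itself, since JSON input is not yet implemented):

```python
import json

# A small hypothetical excerpt of an export: one complete JSON record
# per line, no trailing separator.
sample = "\n".join([
    '{"T":3.2416666666666667,"ddHeight":14.934731608645043}',
    '{"T":3.25,"ddHeight":15.36967956700511}',
    '{"T":3.2583333333333333,"ddHeight":18.539007468953184}',
])

# Each line parses independently, so any chunk of whole lines is valid.
records = [json.loads(line) for line in sample.splitlines() if line.strip()]

for rec in records:
    print(rec["T"], rec["ddHeight"])
```

The same loop works unchanged on a file object, since iterating a file in Python also yields one line at a time.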
  
If both the "labelset name" and the "labelgroup name" are defined in the Export UI, then the "labelgroup name" is interpreted as a phase type name and only the frames of the given phase are exported.
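The effect of that phase filter can be illustrated with a sketch: frames whose time ''T'' falls inside the named phase's interval are kept, the rest are dropped. The phase name, its boundaries, and the helper function below are all invented for the illustration and are not part of the Export UI:

```python
# Hypothetical illustration of phase-based export filtering.
frames = [
    {"T": 3.25, "ddHeight": 15.4},
    {"T": 3.30, "ddHeight": 52.7},
    {"T": 3.35, "ddHeight": -58.8},
]

# Assumed mapping: phase type name -> (start time, end time).
phases = {"flight": (3.28, 3.34)}

def frames_in_phase(frames, phases, phase_name):
    """Keep only the frames whose time T lies within the named phase."""
    start, end = phases[phase_name]
    return [f for f in frames if start <= f["T"] <= end]

print(frames_in_phase(frames, phases, "flight"))
```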
fileformats/json.1515076862.txt.gz · Last modified: 2018/01/04 15:41 by oliver
