{"id":59706,"date":"2016-08-25T16:00:31","date_gmt":"2016-08-25T13:00:31","guid":{"rendered":"https:\/\/www.javacodegeeks.com\/?p=59706"},"modified":"2016-08-24T23:43:25","modified_gmt":"2016-08-24T20:43:25","slug":"apache-spark-packages-xml-json","status":"publish","type":"post","link":"https:\/\/www.javacodegeeks.com\/2016\/08\/apache-spark-packages-xml-json.html","title":{"rendered":"Apache Spark Packages, from XML to JSON"},"content":{"rendered":"<p>The Apache Spark community has put a lot of effort into extending Spark. Recently, we wanted to transform an XML dataset into something that was easier to query. We were mainly interested in doing data exploration on top of the billions of transactions that we get every day. XML is a well-known format, but sometimes it can be complicated to work with. In Apache Hive, for instance, we could define the structure of the schema of our XML and then query it using SQL.<\/p>\n<p>However, it was hard for us to keep up with the changes on the XML structure, so the previous option was discarded. We were using <a href=\"\/products\/product-overview\/apache-spark-streaming\">Spark Streaming<\/a> capabilities to bring these transactions to our cluster, and we were thinking of doing the required transformations within Spark. However, the same problem remained, as we had to change our Spark application every time the XML structure changed.<\/p>\n<p>There must be another way!<\/p>\n<p>There is an Apache Spark package from the community that we could use to solve these problems. In this blog post, I&#8217;ll walk you through how to use an Apache Spark package from the community to read any XML file into a DataFrame.<\/p>\n<p>Let\u2019s load the Spark shell and see an example:<\/p>\n<pre class=\" brush:java\">.\/spark-shell\u200a\u2014\u200apackages com.databricks:spark-xml_2.10:0.3.3<\/pre>\n<p>In here, we just added the XML package to our Spark environment. 
This can of course also be added as a dependency when writing a Spark app and packaging it into a jar file.<\/p>\n<p>Using the package, we can read any XML file into a DataFrame. When loading the DataFrame, we could specify the schema of our data, but that was our main concern in the first place, so we will let Spark infer it. Schema inference is very powerful: since we no longer need to know the schema in advance, it can change at any time.<\/p>\n<p>Let\u2019s see how we load our XML files into a DataFrame:<\/p>\n<pre class=\"brush:java\">val df = sqlContext\r\n          .read\r\n          .format(\"com.databricks.spark.xml\")\r\n          .option(\"rowTag\", \"OrderSale\")\r\n          .load(\"~\/transactions_xml_folder\/\")\r\n\r\ndf.printSchema<\/pre>\n<p>Printing the DataFrame schema gives us an idea of what the inference system has done.<\/p>\n<pre class=\"brush:java\">root\r\n |-- @ApplicationVersion: string (nullable = true)\r\n |-- @BusinessDate: string (nullable = true)\r\n |-- @Change: double (nullable = true)\r\n |-- @EmployeeId: long (nullable = true)\r\n |-- @EmployeeName: string (nullable = true)\r\n |-- @EmployeeUserId: long (nullable = true)\r\n |-- @MealLocation: long (nullable = true)\r\n |-- @MessageId: string (nullable = true)\r\n |-- @OrderNumber: long (nullable = true)\r\n |-- @OrderSourceTypeId: long (nullable = true)\r\n |-- @PosId: long (nullable = true)\r\n |-- @RestaurantType: long (nullable = true)\r\n |-- @SatelliteNumber: long (nullable = true)\r\n |-- @SpmHostOrderCode: string (nullable = true)\r\n |-- @StoreNumber: long (nullable = true)\r\n |-- @TaxAmount: double (nullable = true)\r\n |-- @TaxExempt: boolean (nullable = true)\r\n |-- @TaxInclusiveAmount: double (nullable = true)\r\n |-- @TerminalNumber: long (nullable = true)\r\n |-- @TimeZoneName: string (nullable 
= true)\r\n |-- @TransactionDate: string (nullable = true)\r\n |-- @TransactionId: long (nullable = true)\r\n |-- @UTCOffSetMinutes: long (nullable = true)\r\n |-- @Version: double (nullable = true)\r\n |-- Items: struct (nullable = true)\r\n |    |-- MenuItem: struct (nullable = true)\r\n |    |    |-- #VALUE: string (nullable = true)\r\n |    |    |-- @AdjustedPrice: double (nullable = true)\r\n |    |    |-- @CategoryDescription: string (nullable = true)\r\n |    |    |-- @DepartmentDescription: string (nullable = true)\r\n |    |    |-- @Description: string (nullable = true)\r\n |    |    |-- @DiscountAmount: double (nullable = true)\r\n |    |    |-- @Id: long (nullable = true)\r\n |    |    |-- @PLU: long (nullable = true)\r\n |    |    |-- @PointsRedeemed: long (nullable = true)\r\n |    |    |-- @Price: double (nullable = true)\r\n |    |    |-- @PriceLessIncTax: double (nullable = true)\r\n |    |    |-- @PriceOverride: boolean (nullable = true)\r\n |    |    |-- @ProductivityUnitQuantity: double (nullable = true)\r\n |    |    |-- @Quantity: long (nullable = true)\r\n |    |    |-- @TaxAmount: double (nullable = true)\r\n |    |    |-- @TaxInclusiveAmount: double (nullable = true)\r\n |-- OrderTaxes: struct (nullable = true)\r\n |    |-- TaxByImposition: struct (nullable = true)\r\n |    |    |-- #VALUE: string (nullable = true)\r\n |    |    |-- @Amount: double (nullable = true)\r\n |    |    |-- @ImpositionId: long (nullable = true)\r\n |    |    |-- @ImpositionName: string (nullable = true)\r\n |-- Payments: struct (nullable = true)\r\n |    |-- Payment: struct (nullable = true)\r\n |    |    |-- #VALUE: string (nullable = true)\r\n |    |    |-- @AccountIDLast4: string (nullable = true<\/pre>\n<p>At this point, we could use any SQL tool to query our XML using Spark SQL. 
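<\/p>\n<p>For example, we could register the DataFrame as a temporary table and query it directly; a sketch against the Spark 1.x API used above (the backtick quotes are required because the inferred attribute columns start with @, and the output path is hypothetical):<\/p>\n<pre class=\"brush:java\">\/\/ expose the DataFrame to Spark SQL under a table name\r\ndf.registerTempTable(\"transactions\")\r\n\r\nsqlContext\r\n  .sql(\"SELECT `@OrderNumber`, `@TaxAmount`, `@TaxExempt` FROM transactions\")\r\n  .show()\r\n\r\n\/\/ write the same data back out as line-delimited JSON\r\ndf.write.json(\"\/tmp\/transactions_json\")<\/pre>\n<p>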
Please read this post (<a href=\"https:\/\/medium.com\/@anicolaspp\/apache-spark-as-a-distributed-sql-engine-4373e254e0f9#.w77z4ml3r\" target=\"_blank\">Apache Spark as a Distributed SQL Engine<\/a>) to learn more about Spark SQL. Going a step further, we could use tools that read data in JSON format. Having JSON datasets is especially useful if you have something like <a href=\"\/products\/apache-drill\">Apache Drill<\/a>.<\/p>\n<p>If you have any questions about using this Apache Spark package to read XML files into a DataFrame, please ask them in the comments section below.<\/p>\n<div class=\"attribution\">\n<table>\n<tbody>\n<tr>\n<td><span class=\"reference\">Reference: <\/span><\/td>\n<td><a href=\"https:\/\/www.mapr.com\/blog\/apache-spark-packages-xml-json\">Apache Spark Packages, from XML to JSON<\/a> from our <a href=\"http:\/\/www.javacodegeeks.com\/join-us\/jcg\/\">JCG partner<\/a> Chase Hooley at the <a href=\"http:\/\/www.mapr.com\/blog\">MapR<\/a> blog.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>The Apache Spark community has put a lot of effort into extending Spark. Recently, we wanted to transform an XML dataset into something that was easier to query. We were mainly interested in doing data exploration on top of the billions of transactions that we get every day. 
XML is a well-known format, but sometimes &hellip;<\/p>\n","protected":false},"author":858,"featured_media":22307,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[8],"tags":[1092],"class_list":["post-59706","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-enterprise-java","tag-apache-spark"],"_links":{"self":[{"href":"https:\/\/www.javacodegeeks.com\/wp-json\/wp\/v2\/posts\/59706","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.javacodegeeks.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.javacodegeeks.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.javacodegeeks.com\/wp-json\/wp\/v2\/users\/858"}],"replies":[{"embeddable":true,"href":"https:\/\/www.javacodegeeks.com\/wp-json\/wp\/v2\/comments?post=59706"}],"version-history":[{"count":0,"href":"https:\/\/www.javacodegeeks.com\/wp-json\/wp\/v2\/posts\/59706\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.javacodegeeks.com\/wp-json\/wp\/v2\/media\/22307"}],"wp:attachment":[{"href":"https:\/\/www.javacodegeeks.com\/wp-json\/wp\/v2\/media?parent=59706"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.javacodegeeks.com\/wp-json\/wp\/v2\/categories?post=59706"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.javacodegeeks.com\/wp-json\/wp\/v2\/tags?post=59706"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}