
Principal Sales Engineer with Snowflake — US Public Sector.

Snowflake's native handling of JSON in both READ and WRITE operations is by far and away my favourite feature. One of Snowflake's key differentiating features is our native support for semi-structured data formats, including JSON, XML, Parquet, ORC, and AVRO. Most databases and data stores only support a single format, but Snowflake supports JSON and other semi-structured data natively, alongside relational data. Rather than serializing these formats into one long string and forcing the developer to parse them apart at query time, or requiring a complex pre-loading transformation, Snowflake allows them to be stored in their native format, directly in the database. Users can choose to "flatten" nested objects into a relational table, or store objects and arrays in their native format within Snowflake's VARIANT data type. Remember, semi-structured data is natively supported, as first-class data types, in Snowflake!

As semi-structured data is loaded into a Snowflake variant column, the columnarized metadata for the document is captured and stored: more of a "schema on load" than a "schema on read". This gives the advantage of storing and querying unstructured data, and a simple dot-notation syntax allows direct SQL access to any element value, at any level in the document. For reading JSON, I love:

the dot notation for addressing JSON elements: JSONDoc:Schema:Element::Cast
the dot notation for addressing arrays: JSONDoc:Schema[0]:"Element"::Cast
the dot notation for nested JSON elements: JSONDoc:Schema:NestedSchema:Element::Cast
lateral flattening of unbounded …
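Here is a minimal sketch of that dot-notation style in a query. The table, column, and key names (my_json_table, doc, address, phones, employee) are illustrative assumptions, not taken from any real dataset:

    select doc:address:city::string        as city,         -- nested element
           doc:phones[0]:"number"::string  as first_phone,  -- first element of an array
           doc:employee:age::int           as age           -- cast to a SQL type
    from my_json_table;

Each path navigates the VARIANT with colons, and the trailing :: casts the extracted value to an ordinary SQL type.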
I've been handed massive JSON files and asked to provide detailed analytics on their contents. But, (there's always a "but"…), Snowflake variant columns are currently limited to a gzip-compressed 16MB of data. While most JSON documents easily fit into 16MB compressed (I've seen files of 300MB or more compress down to fit in a Snowflake variant), there are situations where this isn't the case. (I can't discuss pending future enhancements, but suffice it to say that we're exploring options here.) A single JSON text that is 1GB in size or larger is simply too big to be included in a Snowflake COPY statement. One solution is to split the file into smaller chunks prior to executing the PUT and COPY statements. The problem with splitting JSON is that it's not line-based, like CSV files.
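For context, here is a minimal sketch of the kind of landing table that limit applies to; the table and column names (raw_docs, doc) are my own illustrative choices and are reused in later sketches:

    create or replace table raw_docs (doc variant);

    -- a rough way to see how large each stored document is; this is the
    -- uncompressed character count, while the storage limit applies to the
    -- compressed size
    select length(to_json(doc)) as doc_chars
    from raw_docs;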
JQ is the Swiss Army knife of JSON utilities, and this article will present a commandline script that uses jq to split JSON files into Snowflake-sized chunks. This utility can run anywhere, so review their downloads page for your specific platform. I'm on a Mac OS X machine, so I used Homebrew, with the command brew install jq.

The files in question come from openFDA. All this data is available at open.fda.gov, and is provided as a set of zipped JSON files. Looking at one of those openFDA files, you can see the nested structure: a single outer object containing a property with an inner array, an outermost "wrapper" array element called results, and each element of that array is a highly nested object. There are over 135,000 of these array elements in this one file, which itself is over 1 GB.

What I've written is a bash shell script that reads in the larger file and spits it out in smaller chunks, keeping the integrity of the JSON structure intact. The whole point of this script is to identify the "repeating entities" in the JSON that can be safely extracted and written as a batch into a set of smaller files. What my script does is loop through the file, writing out batches of individual array elements into separate files. This script works by streaming the entire file into memory on your local machine, and then splitting it out from there. Drop it in the same folder with the large JSON file you need to split.
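Once the chunks are loaded with one array element per row (the COPY options that make that happen are covered below), a quick sanity check is to compare the loaded row count against that element count. This assumes the illustrative raw_docs table sketched above:

    -- should line up with the 135,000+ elements counted in the source file
    select count(*) as loaded_elements
    from raw_docs;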
The script takes 5 commandline arguments:

c: A simple counter used to generate unique output file names. Start at any number, and they'll increment by 1 for each iteration.
s: StartIndex. Usually 0, but this allows you to skip a certain number of nodes before writing them out.
b: BatchSize.
f: The file name of the large JSON file.
t: The name of the outer array that contains the repeating nodes.

The heart of the loop control is a handful of lines like these: the loop stops once the last element has been written, the start index advances by the batch size on each pass, and the end of each batch is clamped to the array length.

    if [[ $EndIndex -eq $ArrayLength ]]; then break; fi
    StartIndex=$(( $StartIndex + $BatchSize ))
    if [[ $EndIndex -gt $ArrayLength ]]; then EndIndex=$ArrayLength; fi

Here's how I invoked it against one of the openFDA drug files:

    ./split_json.sh -c 0 -s 0 -b 10000 -f drug-ndc-2021-01-29.json -t results

For this file, batches of 10,000 resulted in 13 files, named output-0.json through output-12.json. These can then be uploaded into internal or external stages, and loaded into Snowflake using a COPY statement. You can then PUT these into a stage, and execute SELECT or COPY statements against the entire folder. That exercise is "left to the user"…
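As a rough sketch of that "left to the user" step, assuming an internal stage (the stage name and local path are illustrative, and PUT is issued from a client such as SnowSQL rather than the web UI):

    create or replace stage json_stage;

    put file:///tmp/output-*.json @json_stage auto_compress = true;

    -- shows name, size, md5, and last_modified for each uploaded chunk
    list @json_stage;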
Suppose you have JSON files named json_sample_data and json_sample_data2 that you would like to parse using Snowflake SQL. The json_sample_data2 file contains an array with 3 employee records (objects) and their associated dependent data: the names and ages of the employee's children, the cities where the employee has lived, and the corresponding years. In JSON, an object (also called a "dictionary" or a "hash") is an unordered set of key-value pairs; we call the items key-value pairs, like {"key": "value"}. Sometimes JSON objects have internal objects consisting of one or more fields, without a set structure. Now we need to store these representative JSON documents in a table.

How a file loads depends on its shape and on the FILE_FORMAT options you use:

Case 1: The file doesn't have an outer array.
Case 2: The file has an outer array, and STRIP_OUTER_ARRAY = TRUE is set in the FILE_FORMAT.
Case 3: The file has an outer array, and STRIP_OUTER_ARRAY = TRUE is removed from the FILE_FORMAT.
Case 4: Recompose the JSON file after reading it line by line.
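Here is a hedged sketch of how Cases 2 and 3 differ at load time, reusing the illustrative stage and landing table from the earlier sketches:

    -- Case 2: strip the outer array, so each array element becomes its own row
    copy into raw_docs
      from @json_stage
      file_format = (type = 'json' strip_outer_array = true);

    -- Case 3: keep the outer array, so each file lands as a single VARIANT
    -- value (and is therefore subject to the per-value size cap)
    copy into raw_docs
      from @json_stage
      file_format = (type = 'json');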
Load a sample JSON document using a simple INSERT statement and Snowflake's PARSE_JSON function. Step 1 is to PARSE_JSON, which converts a string into a variant data type formatted as a JSON object. Here we take our JSON string and insert it into Snowflake using the PARSE_JSON function. It looks very similar to your standard INSERT command, but instead of VALUES we're using SELECT in combination with a Snowflake helper function, PARSE_JSON, to convert the JSON string into a JSON object that the VARIANT type can accept:

    create or replace table json_example (v variant);

    insert into json_example
      select parse_json('{"key": "value"}');

This gives the advantage of storing and querying unstructured data. Executing queries against that semi-structured variant column is then extremely easy; Snowflake supports querying JSON columns directly. For example, to get only salesperson.name:

    -- level 2 element: get salesperson.name from the customers table
    select parse_json(text):salesperson.name as sales_person_name
    from customers;

Step 2 is the lateral flatten. You can use the (LATERAL) FLATTEN function to extract a nested variant, object, or array from JSON data. We can also add a where clause, using the same colon (:) and dot (.) notation as on the other side of the select statement.
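Putting Step 2 together with that where clause, here is a sketch of a flatten query. The packaging array and the field names inside it (product_ndc, package_ndc, description, finished) are assumptions about the document shape rather than anything taken from the original example:

    select t.doc:product_ndc::string    as product_ndc,
           p.value:package_ndc::string  as package_ndc,
           p.value:description::string  as description
    from raw_docs t,
         lateral flatten(input => t.doc:packaging) p
    where t.doc:finished::boolean = true;

Each element of the nested packaging array becomes its own output row, while the where clause filters on a top-level element of the same document.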
A quick reference on the functions involved. Snowflake's semi-structured data functions are grouped by the type of operation performed: parsing JSON and XML data, creating and manipulating arrays and objects, and determining the data type of values in semi-structured data (i.e. type predicates). These functions are used with semi-structured data (including JSON, Avro, and XML), typically stored in Snowflake in VARIANT, OBJECT, or ARRAY columns. ARRAY_CONTAINS, for example, takes a VARIANT and an ARRAY value as inputs and returns true if the VARIANT is contained in the ARRAY. The ARRAY type is used to represent dense or sparse arrays of arbitrary size, where the index is a non-negative integer (up to 2^31-1) and the values have VARIANT type; Snowflake does not currently support fixed-size arrays or arrays of elements of a specific non-VARIANT type.

PARSE_JSON interprets an input string as a JSON document and produces a VARIANT value. If the input is NULL, the output will also be NULL. However, if the input string is 'null', it is interpreted as a VARIANT null value; that is, the result is not a SQL NULL but a real value used to represent a null value in semi-structured formats. When parsing a decimal number, PARSE_JSON attempts to preserve the exactness of the representation by treating 123.45 as NUMBER(5,2) rather than as a DOUBLE; however, numbers using scientific notation (e.g. 1.2345e+02), or numbers that cannot be stored as fixed-point values because of range or scale limitations, are stored as DOUBLE. Because JSON does not natively represent values such as TIMESTAMP, DATE, TIME, or BINARY, these have to be represented as strings. TO_JSON goes the other way: it takes a JSON-compatible variant and returns a string. TO_JSON and PARSE_JSON are (almost) converse or reciprocal functions.
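A small sketch of the NULL handling just described:

    select parse_json(null)   as sql_null_result,   -- SQL NULL in, SQL NULL out
           parse_json('null') as json_null_result;  -- a VARIANT containing a JSON null

The first column comes back as an ordinary SQL NULL, while the second is a real VARIANT value that TYPEOF() reports as NULL_VALUE.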
However, the two functions are not strictly inverses: when whitespace differs, or when the order of key-value pairs differs, the output may not match the input. The order of the key-value pairs in the string produced by TO_JSON is not predictable, and the string produced by TO_JSON can contain less whitespace than the string passed to PARSE_JSON. For example, TO_JSON(PARSE_JSON('{"b":1,"a":2}')) returns '{"a":2,"b":1}': the same pairs, but not the same string. Similarly, PARSE_JSON and TO_VARIANT can both take a string and return a variant, but they are not equivalent. Besides trivial differences in whitespace, there is a significant difference in quoting: PARSE_JSON('{"PI":3.14}') returns a VARIANT containing an object, while TO_VARIANT('{"PI":3.14}') returns a VARIANT containing the original string (with its quotes escaped), and the two values do not compare as equal.

One possible enhancement to the splitting script would be to explore the streaming option of jq. With the --stream option, jq can parse input texts in a streaming fashion, allowing jq programs to start processing large JSON texts immediately rather than after the parse completes. If you have a single JSON text that is 1GB in size or larger, streaming it will allow processing to start much more quickly.

On the querying side, "Automating Snowflake's Semi-Structured JSON Data Handling" presents a technique for automatically building database views based on the structure of JSON data stored in Snowflake tables. JSON documents can contain both simple and object arrays, and the existing stored procedure simply returns these as view columns of datatype ARRAY; it would be desirable to have the contents of these arrays exposed as separate view columns. Inside the stored procedure, statements are created and executed with Snowflake's JavaScript API, along these lines:

    var array_stmt = snowflake.createStatement({sqlText: array_query});
    var array_res = array_stmt.execute();

It's a real time-saver, and you'll find the complete code plus a usage example at the end of the second part of that blog post.
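As a rough illustration of the end result, here is a hand-written version of the kind of view such a procedure generates. The view name and element names are purely illustrative:

    create or replace view drug_products_vw as
    select doc:product_ndc::string as product_ndc,
           doc:brand_name::string  as brand_name,
           doc:packaging::array    as packaging   -- simple and object arrays surface as ARRAY columns
    from raw_docs;

A view like this keeps the dot-notation details out of downstream queries, while the ARRAY columns remain available for further flattening.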

