Hello, I’m Mike Combs, and I’m the VP of Marketing at Veristorm. I’ll show you better data integration with the System z Connector for Hadoop, which lets you copy enterprise data into Hadoop and other big data applications.

First, you start the app through the web. On the left side you can see data sources, including the System z mainframe and JDBC sources. There’s the Players file; it’s a VSAM file, and that’s the one we’re going to copy. I just click Copy, and a wizard comes up that walks us through. First, I pick the output format, such as pipe-separated, JSON, or CSV. Then I select the target. It’s going to be Hive, but I could pick other Hadoop platforms like Cloudera, Hortonworks, or BigInsights, NoSQL databases like MongoDB, or any JDBC target.

Now I need to select the copybook. We support COBOL copybooks as well as PL/I. The data is parsed automatically: you can see the field names, lengths, and types, and you can choose which fields to copy and which to filter out. Large files can be split and streamed in parallel for better throughput; we’re 4x faster than the competition.

You can schedule the job for later, set it to repeat daily or monthly, or even every five minutes, or save the file to use with other schedulers. To run immediately, just click Finish. Now we can see the job has started, and we can watch it run. And it’s done! Now let’s see the output: I drill down to Hive, or I can click Browse and view the data down here.

Now let’s try DB2. We’ll move this stock exchange data, which we can browse here, and then copy it. Again, we’re going to move it to Hive. I can select columns to filter, and I can filter rows with logical formulas. As before, I can use the built-in scheduler, save the job for an external scheduler, or click Finish to run it now. There’s our new job; it’s a much bigger file. Once it completes, we can view the output by clicking Browse to see the data.

So that’s it: point-and-click, with no staging costs and no conversion MIPS. Please visit our website to learn more. Thank you.
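The copybook step in the demo derives field names, lengths, and types from a record layout so the data can be parsed automatically. As an illustrative sketch only (not Veristorm’s actual parser), here is how a simple fixed-width record might be decoded in Python against a hypothetical copybook-style layout; the field names and widths are invented for the example.

```python
# Illustrative sketch: a hypothetical copybook-style layout and a decoder
# for fixed-width records. A real connector also handles EBCDIC encoding,
# COMP-3 packed decimals, OCCURS clauses, and more.

# (name, width, type) -- invented fields loosely matching a "Players" file
LAYOUT = [
    ("PLAYER-ID",   6,  "int"),
    ("PLAYER-NAME", 20, "str"),
    ("TEAM-CODE",   3,  "str"),
    ("BAT-AVG",     4,  "int"),  # e.g. stored as 0342, meaning .342
]

def parse_record(record: str) -> dict:
    """Slice one fixed-width record into typed fields per LAYOUT."""
    out, pos = {}, 0
    for name, width, typ in LAYOUT:
        raw = record[pos:pos + width]
        out[name] = int(raw) if typ == "int" else raw.strip()
        pos += width
    return out

row = parse_record("000042Babe Ruth           NYY0342")
print(row["PLAYER-NAME"])  # Babe Ruth
```

Once records are parsed this way, emitting them as pipe-separated or CSV rows for a Hive target is a straightforward join over the field values.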
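The DB2 portion of the demo mentions selecting columns and filtering rows with logical formulas before the copy runs. As a hedged sketch of that idea (not the connector’s actual filter syntax), the same selection can be expressed in Python as a column list plus a row predicate; the table contents, column names, and condition below are made up for illustration.

```python
# Illustrative sketch: column selection plus a row filter, mimicking the
# "select columns to filter, filter rows with logical formulas" step.
# The stock-exchange rows, columns, and predicate are invented examples.

rows = [
    {"SYMBOL": "IBM",  "PRICE": 182.5, "VOLUME": 4_200_000},
    {"SYMBOL": "ACME", "PRICE": 9.9,   "VOLUME": 150_000},
    {"SYMBOL": "XYZ",  "PRICE": 54.0,  "VOLUME": 980_000},
]

keep_columns = ["SYMBOL", "PRICE"]  # column filter: drop VOLUME from output

def predicate(r: dict) -> bool:
    """Row filter: a logical formula over column values."""
    return r["PRICE"] > 10 and r["VOLUME"] > 500_000

filtered = [{c: r[c] for c in keep_columns} for r in rows if predicate(r)]
print(filtered)  # [{'SYMBOL': 'IBM', 'PRICE': 182.5}, {'SYMBOL': 'XYZ', 'PRICE': 54.0}]
```

In the wizard this happens declaratively before the data moves, so only the selected columns and matching rows are streamed to the Hive target.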