So you do the search you want and create the table you want in the Search app. The results run to hundreds of thousands of rows, which is good. So you click the Export button and download the results to CSV. When you open the file, you see 50,000 rows. It's a large enough result set that most people want to keep it in Splunk for analysis. However, there are times when such a large export is required, and you really don't want to log on to the Splunk server to get it either.

I recently bumped into this problem myself while working on a new app. When developing a new app, we don't work on production data. However, we have an event generator that allows us to replay log files into our test environment so that we have a large data set to work with. In my case, this was a ULS log set from a SharePoint farm. One day's logs can be several hundred megabytes of data. However, the production data is in California and I was in Australia.

Fortunately, we do have tools available to do this. One of these is the RESTful interface to the back end of the search head. This is great for developers (and if you are one of those developers, head over to the Splunk developer portal for information on our SDK interfaces). It's also great for these larger export jobs and for automation. curl is standard issue on Linux systems, but there are downloads available for Windows as well, which is my platform of choice. (Note: if you are doing this in PowerShell, you will need to remove the alias for curl, which points at Invoke-WebRequest instead. Invoke-WebRequest is not the same thing at all!)

So how do we do this? First off, figure out your search. In my case, I want a particular sourcetype for one day – let's say two days ago. So here is my search:

index=mssharepoint sourcetype=MSSharePoint:2013:ULSAudit host=SP-APP01 | table index,host,source,sourcetype,_raw

Note that I am explicitly setting the fields I want and putting the results into a table. I want to store the results of this search in a file called sp-app01.csv.

The REST endpoint we are going to use is the /search/jobs/export endpoint, and you use it like this:

curl -k -u admin:mypassword https://splunk-server:8089/services/search/jobs/export --data-urlencode search='search index=mssharepoint sourcetype=MSSharePoint:2013:ULSAudit host=SP-APP01 | table index,host,source,sourcetype,_raw' -d output_mode=csv -d earliest_time='-2d@d' -d latest_time='-1d@d' -o sp-app01.csv

(Replace splunk-server with your own search head; the earliest_time and latest_time bounds select the single day starting two days ago.)

If you leave off the -o option, the output is streamed to your console – given the amount of data you are grabbing, this is not optimal. If you include the -o option, you get a nicely formatted display of progress. The numbers update during the process, telling you how much has been downloaded and the speed at which the data is coming across. Some 12 minutes later, I have a 369 MB file with well over 500,000 lines of data, all formatted as CSV.

If you are doing this for event generation, then this isn't the end of the line. You need to remove the header, reverse the lines, and then add the header back in.
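That header-strip-and-reverse step can be sketched with standard shell tools. This is a minimal sketch, assuming the exported file has a single header line and that GNU tac is available (it ships with coreutils on Linux; on Windows it comes with environments such as Git Bash or Cygwin). The output filename sp-app01-reversed.csv is just an example:

```shell
# Keep the CSV header as line 1 of the new file.
head -n 1 sp-app01.csv > sp-app01-reversed.csv
# Append the data rows (everything after line 1) in reverse order.
tail -n +2 sp-app01.csv | tac >> sp-app01-reversed.csv
```

Search results come back newest-first, so reversing the data rows puts the events into chronological order for replay, while the header stays at the top.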