Data Insight 3.0.1 scanning cifs.homedir share
I am trying to set up Data Insight 3.0.1 with an artificial share so I can see activity on users' home shares from a NetApp filer. I have created the .pb script as described in the admin guide, but I am struggling to find the user ID to use with the configdb.exe command. The guide says: "Find the user ID of a Data Insight user assigned the Product Administrator/Server Administrator role from the latest configuration database table in the $DATADIR/conf folder." Is this the user ID of the Server Administrator that I see in the Settings -> Data Insight Users screen? If so, how do I get that user ID from Windows? Also, I am guessing that the script line: value: "/CIFS.HOMEDIR" should contain the actual path to the home directories? Thanks, folks. David

DQL Report does not output to 1 file 5.0.1
I am using DI 5.0.1 and I am not able to get the DQL report to output to one single file. Is anyone else having this issue? I have a DQL query, and when I pull some of the AD attributes, it splits the results across multiple files. I have even put 'FORMAT user AS CSV' at the end and it still puts the output into multiple files:

FROM user GET name, principal_name, login, Home_directory, Department
IF Home_Directory NOT IN ('nowhere')
FORMAT user AS CSV;

Please help!

Data Insight - report.exe high cpu
Our management server is running consistently at 100% CPU, with two instances of report.exe each consuming 50%. My guess is that it is updating itself from the overnight scans? This is a Windows VM with 2 cores; did I undersize the VM? Looking to understand if I've misconfigured something. Thank you!

Speeding up console
Are there any configuration changes we can make on the server side to speed up the console? Browsing the workspace is pretty slow, for example when pulling audit data or permission info. The server is not running low on memory or CPU. We're running DI 4.0.

DQL: Sum the Sum of Total Access, Exporting Dashboards
Is there a way to use DQL to show the sum of the sum of total access for sites? For example, you can generate reports to show the number of people with access levels of create, read, write, and delete per site, but can you also show that total per person? We are looking to determine an individual's level of access to each site monitored by DI, summed per level of access. In addition, can you export the generated graphs and dashboards to Excel? You can save the raw data in table format, but the graphs would also be helpful. Thanks! Evan

DQL Query to find a specific filename
How do I look for a specific file? I.e., what is the field for "filename"? Using a DQL report, I can find all files with the extension "url". What do I change to find a specific file (HELP_DECRYPT.URL)?

FROM path
GET absname,
    size AS size_bytes,
    formatdate(created_on,'YYYY/MM/DD HH:MM:SS Z') AS creation_time,
    formatdate(last_accessed,'YYYY/MM/DD HH:MM:SS Z') AS last_accessed_time,
    formatdate(last_modified,'YYYY/MM/DD HH:MM:SS Z') AS last_modified_time
IF type = "FILE" AND extension IN ("url")
SORTBY size DESC

Data Insight - backlogged files
We are running Data Insight 4.5, with multiple indexers/collectors doing file scans and FPolicy auditing across nearly two dozen NetApp filers. Is there a best method to reduce the lag between an event occurring (such as a file being deleted from a filer) and when the event shows up in Data Insight? We are running our indexers/preprocessors at 30-minute intervals, but our file backlog for most things remains over an hour. We are not hitting much utilization on our collectors (averaging 3% CPU utilization and 40% memory on 64-bit, 12 GB RAM, 8-core Server 2008 R2). In the performance section, the backlog size is generally 30-100 files, 10-20 MB.

Data Insight - max results issue
When running reports in Data Insight, we need to expand the number of results in the reports. At another site, Data Insight is configured to allow about 99999999999 records before truncation, because a conf file was edited. No one recorded what that file was or where it is, so we can't duplicate this at the new site. This is the report setting we want to increase past the standard limit: "Truncate output if record count exceeds:". Does anyone know the config file I'm referring to, or where it is located?
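Returning to the exact-filename DQL question above: a minimal sketch of one way to match a specific file rather than an extension. This assumes the path table exposes a `name` attribute holding the file name without its directory; that attribute name is an assumption, so verify it against the DQL schema reference for your Data Insight version before relying on it.

```sql
-- Hedged sketch: same shape as the extension query above, but filtering on an
-- exact file name. "name" as a path-table attribute is an assumption; if it
-- errors out, check your version's DQL schema for the correct field.
FROM path
GET absname, size AS size_bytes
IF type = "FILE" AND name = "HELP_DECRYPT.URL"
SORTBY size DESC
```

If the match is case-sensitive on your filer, you may need to try both upper- and lower-case variants of the file name.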