| The rise of ransomware can be attributed to several factors. First, world conflict has intensified cyber-attacks, and the exponential growth of digital communication and storage has created a vast landscape of potential targets.
Second, the increasing sophistication of ransomware attacks, often involving social engineering techniques, has made it easier for attackers to gain access to sensitive information, and the ability to force payment through cryptocurrencies has made it easier for attackers to remain anonymous and avoid detection. Ransomware attacks are usually carried out through phishing emails, infected downloads, or vulnerabilities in outdated software. Once the ransomware is installed on the victim's device, it can spread to other data on the same network, making it a particularly dangerous threat to the enterprise.

Ransomware attacks can have severe consequences for victims. In addition to the loss of data, companies may face significant financial losses and reputational damage. Governments may also experience disruptions in critical infrastructure, leading to potential public safety concerns. According to IBM and the Ponemon Institute, the average cost of a data breach in 2022 was $4 million.

Preventing and mitigating ransomware attacks requires a multi-pronged approach. Companies and individuals should take steps to secure their networks and data, including implementing robust password policies, regularly backing up data, and using antivirus software. In addition, education and awareness campaigns can help reduce the risk of falling victim to the social engineering techniques used in ransomware attacks.

One of the most effective ways to protect sensitive data is to use data-at-rest encryption for storage mediums such as file servers and cloud storage locations. In addition, a process known as sharding can provide excellent resiliency by dividing data into smaller parts called shards. Each shard contains only a portion of a file, so even if one shard is compromised the complete file is not exposed; this provides resiliency against both ransomware and data exposure by the bad guys. The process also employs encryption keys using strong algorithms such as AES-256 to encrypt the sensitive data before storing it, ensuring that only authorized users can access the data.

Together, data-at-rest encryption and sharding can provide a high level of security for sensitive data. Even if the data is stolen, it will be unreadable to unauthorized users, so personally identifiable information (PII) cannot be exposed to the world. This also helps organizations greatly improve compliance with data protection laws such as GDPR, POPIA, and CCPA/CPRA.

Step 1 should include the use of a technology that provides at-rest encryption for your sensitive data, significantly minimizing an attacker's access to your organization's crown jewels. Coupled with a technology that provides sharding, which by its nature enables point-in-time recovery, this assures productivity even in what may seem the worst of times. Make this part of your Zero Trust approach.
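To illustrate the encrypt-then-shard pattern described above, here is a minimal Python sketch using the open-source cryptography package; it is illustrative only, not any particular product's implementation, and key management is deliberately out of scope:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_and_shard(data: bytes, key: bytes, shard_size: int = 1024):
    """Encrypt data with AES-256-GCM, then split the ciphertext into shards."""
    nonce = os.urandom(12)  # must be unique per encryption with the same key
    ciphertext = AESGCM(key).encrypt(nonce, data, None)
    # Each shard holds only a slice of the ciphertext; store them in separate locations.
    shards = [ciphertext[i:i + shard_size] for i in range(0, len(ciphertext), shard_size)]
    return nonce, shards

def reassemble_and_decrypt(nonce: bytes, shards, key: bytes) -> bytes:
    """Recombine all shards and decrypt; any missing or tampered shard makes this fail."""
    return AESGCM(key).decrypt(nonce, b"".join(shards), None)

key = AESGCM.generate_key(bit_length=256)  # AES-256 key
nonce, shards = encrypt_and_shard(b"example sensitive record", key)
assert reassemble_and_decrypt(nonce, shards, key) == b"example sensitive record"
```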
|
| Using the Sensitive Data Manager Agent version 11.8.2 or later provides support for SharePoint on-premises and online Modern Authentication, which is more secure than Basic Authentication. The following Microsoft guide provides guidance on setting up the SharePoint app-only principal to use Modern Authentication with other technology integrations.
You will first need to grant access using SharePoint App-Only by following either the Microsoft documentation or the steps below.
Navigate to your SharePoint site using the following URL to generate the required Client Id and Client Secret - https://contoso.sharepoint.com/_layouts/15/appregnew.aspx
- Click both the Generate buttons to create the Client Id and Client Secret
- Add the appropriate information in the following fields.
Example Title: SharePoint App
App Domain: coryretherford.com
Redirect URI: https://coryretherford.com
Note:
- If you receive a SharePoint permissions access error when accessing the https://contoso.sharepoint.com/_layouts/15/appregnew.aspx URL, use the following URL instead:
- https://contoso-admin.sharepoint.com/_layouts/15/appregnew.aspx
As indicated in the image below:
- Input the generated Client Id into the Spirion Identity Provider Id field.
- Input the generated Client Secret into the Spirion Password field.
- *Note – the username is not needed and will result in an error if added.
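To sanity-check the generated credentials, the following hedged Python sketch requests a SharePoint app-only token from ACS, the standard flow behind the app-only principal; the tenant GUID and the contoso host are placeholders you would substitute:

```python
import requests

tenant_id = "00000000-0000-0000-0000-000000000000"    # your Azure AD tenant GUID (placeholder)
client_id = "generated-client-id"                     # from appregnew.aspx
client_secret = "generated-client-secret"             # from appregnew.aspx
sp_principal = "00000003-0000-0ff1-ce00-000000000000" # SharePoint Online principal ID

resp = requests.post(
    f"https://accounts.accesscontrol.windows.net/{tenant_id}/tokens/OAuth/2",
    data={
        "grant_type": "client_credentials",
        "client_id": f"{client_id}@{tenant_id}",
        "client_secret": client_secret,
        "resource": f"{sp_principal}/contoso.sharepoint.com@{tenant_id}",
    },
)
resp.raise_for_status()
# A bearer token usable against the SharePoint REST API confirms the app-only setup works.
print(resp.json()["access_token"][:40], "...")
```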
 |
| The US Department of Defense (DoD) released the Cybersecurity Maturity Model Certification (CMMC) on January 31, 2020 as a unified standard for implementing cybersecurity across the defense industrial base, which includes over 300,000 companies. The CMMC is the DoD's response to the significant number of compromises of sensitive CUI containing defense information located on contractors' information systems. To be eligible for DoD contract awards, contractors are required to have CMMC certification.
Contractors are responsible for implementing and monitoring their information technology systems and any sensitive DoD information stored on those systems. The CMMC framework guides companies with the appropriate levels of cybersecurity practices and processes to protect Federal Contract Information (FCI) and Controlled Unclassified Information (CUI) within their unclassified networks.
The CMMC consists of five certification levels of increasingly mature cybersecurity practices.
- CMMC Level 1. Basic cyber hygiene practices, including sensitive data management.
- CMMC Level 2. Protect Controlled Unclassified Information (CUI).
- CMMC Level 3. Practices to safeguard CUI, including the NIST 800-171 controls.
- CMMC Level 4. Practices to address the techniques and procedures of advanced persistent threats (APTs).
- CMMC Level 5. Sophisticated capabilities in place to detect and respond to APTs.
Controlled Unclassified Information (CUI) is information that government agencies and some of their contractors are required to both mark and classify within their data stores. CUI represents a particular kind of sensitive data created by the U.S. federal government, or developed on its behalf, that merits special protection against exposure.
As a result of the CMMC and the contractual agreements between contractors and the DoD, assessors must understand a contractor's response capabilities by knowing which systems store CUI data that may not be within policy. When it comes time to prove that CMMC controls are in place, you must be able to audit your systems, generate comprehensive reports, and review audit reports in detail. Doing so requires a robust and accurate vended data discovery toolset.
Avoid Loss of DoD Contracts
A typical government contract is around $250,000, and without this certification there is substantial risk of losing contracts. To reduce the loss of contracts and/or the potential for a data breach involving CUI under DFARS 7012, NIST 800-171/172, and the CMMC, it's necessary to identify the locations that store sensitive data assets processing Controlled Unclassified Information (CUI).
Conducting regular CUI risk or breach damage assessments is time intensive, and doing so manually is not attainable. It's necessary to use an industry-trusted data discovery tool that provides the technologies needed to accurately locate common types of PII and CUI. These automated tools reduce the overall time spent locating documents with common categories or markings that may be in scope for the CMMC.
The U.S. government's rules for protecting CUI include marking (classifying) documents to indicate their protected status. The National Archives and Records Administration (NARA) issued a handbook on marking best practices in 2016 that cites the proper organizational markings and categories to consider when looking for CUI.
- Categories
- Banner Marking: Specified Authorities
- Category Marking
- Organizational Index Grouping
CMMC compliance will help reduce the potential loss of contracts, and using discovery tools to accurately locate these types of data is core to the CMMC. Before the concept of CUI was introduced in 2008, documents that contained sensitive defense information such as schematics, reports, and other technical data were marked with an array of acronyms indicative of their protected status, such as For Official Use Only (FOUO) and Sensitive But Unclassified (SBU). Since the executive order that introduced CUI, NARA has been in charge of facilitating consistent standards across the DoD.
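To make the discovery idea concrete, here is a minimal sketch, assuming plain-text files, of flagging content that carries common CUI banner markings or the legacy FOUO/SBU acronyms; the path and patterns are illustrative, and real discovery tools also handle binary formats, images (OCR), and context:

```python
import re
from pathlib import Path

# Common banner markings: "CUI", category-specified forms like "CUI//SP-CTI",
# and the legacy acronyms FOUO and SBU described above.
MARKING_RE = re.compile(r"\b(CUI//[A-Z/-]+|CUI|FOUO|SBU)\b")

def flag_marked_files(root: str):
    """Return (path, marking) pairs for text files containing a CUI-style marking."""
    hits = []
    for path in Path(root).rglob("*.txt"):
        match = MARKING_RE.search(path.read_text(errors="ignore"))
        if match:
            hits.append((str(path), match.group(0)))
    return hits

for location, marking in flag_marked_files("/data/contracts"):  # hypothetical root
    print(f"{location}: found marking {marking}")
```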
The Right Tool for the CMMC
Maintaining good alignment with the CMMC is about using the right set of tools; no single security tool can do it all. Spirion is one such tool, identifying both PII and CUI across structured and unstructured data by searching text and images for common PII, or for phrases, words, and acronyms that are indicative of CUI. This toolset is fundamental to assisting compliance with the CMMC via its data discovery and classification capabilities.
Success will be achieved through accurate and automated processes to identify and classify sensitive data, such as CUI, as it relates to the CMMC. By conducting regular CUI risk assessments throughout the business's information ecosystem and implementing a data classification policy that embeds labels into documents and files, organizations can delineate sensitivity and protect against unauthorized and unintended transfers and publication of CUI. |
| Protecting sensitive data is a challenging task. Between the complexities of the data itself and the legal implications surrounding an alphabet soup of data privacy regulations, too many organizations struggle to develop protection strategies. Visibility into the data is one of the most difficult things to accomplish, yet it is vital for compliance.
For organizations that accept credit card payments, compliance with the Payment Card Industry Data Security Standard (PCI DSS) is a must. "Maintaining payment security is required for all entities that store, process or transmit cardholder data," the PCI Security Standards Council explained. PCI DSS "set the technical and operational requirements for organizations accepting or processing payment transactions, and for software developers and manufacturers of applications and devices used in those transactions."
The PCI Security Standards demonstrate that data discovery is foundational and core to the assessment for a PCI audit. There are twelve requirements, all designed to put protection of consumer PII first. The requirements include a multitude of security controls placed on the devices storing sensitive PCI data.
The Cost of Non-Compliance
PCI compliance continues to be a challenge, with only 27.9% of organizations achieving 100% compliance during their interim compliance validation, according to the Verizon 2020 Payment Security Report. Compliance should not be seen as a "checkbox" activity, but rather an everyday activity to protect sensitive data.
The average cost of a data breach is $3.86 million, according to the IBM and Ponemon Institute Cost of a Data Breach 2020 report. When consumer PII, the very data PCI DSS is designed to protect, is compromised, it costs a company $150 per breached record. Data breaches also result in a loss of reputation and consumer confidence, on top of lost revenue and the impact of not being able to process payment card transactions. Consumers don't like having credit cards replaced regularly because a company failed to protect sensitive information, and they will take their business elsewhere. According to the Deloitte Global Survey on Reputation Risk, on average 25% of a company's market value is directly attributable to its reputation.
The cost of failing PCI compliance goes beyond data breaches and reputational loss. Companies not meeting the requirements are fined thousands of dollars for each month of non-compliance. There are also legal costs to consider during the remediation process, along with the inability to process payment card transactions.
PCI compliance comes at a cost. The size and scope of your organization, the overall security posture of the company, and whether or not there is dedicated staff handling PCI compliance will all factor into the cost of setting up and maintaining mechanisms for PCI standards.
Why Accurate Audits Matter
PCI audits can be costly because they require the company to have the right processes and tools in place. Audits are time consuming and stressful for your security and data privacy teams, but they are vital to protecting both the company and its customers. Knowing which devices store and process sensitive data is vital to reducing PCI costs, as well as reducing the potential for breaches, because your systems continuously track and "know" the location of sensitive data. Nothing is left unknown.
Accuracy matters when it comes to identifying where your PCI data really lives. Not being able to accurately discover PCI data will impact your overall assessment and add costs to the process. Organizations must be able to demonstrate to the auditors (QSAs) that data was not located on devices outside the scope of PCI. A PCI audit must validate that the perceived scope of compliance is in fact accurately defined and documented.
Organizations shouldn't view a PCI audit as a point-in-time process, but as an ongoing exercise that demonstrates governance of cardholder data throughout the entirety of the data lifecycle.
Regulations like PCI DSS are designed to protect data privacy, which in turn goes a long way in preventing data breaches. Maintaining awareness of where PCI data resides is crucial to maintaining good consumer privacy practices. While you need to invest upfront in the right data management systems and whatever security tools are needed for compliance, being PCI compliant will pay off in the long run. |
| The security of this directory server can be significantly enhanced by configuring the server to enforce validation of Channel Binding Tokens received in LDAP bind requests sent over LDAPS connections. Even if no clients are issuing LDAP bind requests over LDAPS, configuring the server to validate Channel Binding Tokens will improve the security of this server.
For more details and information on how to make this configuration change to the server, please see https://go.microsoft.com/fwlink/?linkid=2102405.
How to set the client LDAP signing requirement by using a domain Group Policy Object
- Select Start > Run, type mmc.exe, and then select OK.
- Select File > Add/Remove Snap-in.
- In the Add or Remove Snap-ins dialog box, select Group Policy Object Editor, and then select Add.
- Select Browse, and then select Default Domain Policy (or the Group Policy Object for which you want to enable client LDAP signing).
- Select OK.
- Select Finish.
- Select Close.
- Select OK.
- Select Default Domain Policy > Computer Configuration > Windows Settings > Security Settings > Local Policies, and then select Security Options.
- Right-click Network security: LDAP client signing requirements, select Properties, select Require signing in the list, and then select OK.
- In the Confirm Setting Change dialog box, select Yes.
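Group Policy ultimately writes registry values on the affected machines. As a purely illustrative verification (not a configuration tool), this Python sketch reads the publicly documented value names on a Windows host; LdapEnforceChannelBinding and LDAPServerIntegrity apply to domain controllers, LDAPClientIntegrity to LDAP clients:

```python
import winreg

CHECKS = [
    # 0 = never, 1 = when supported, 2 = always enforce channel binding
    (r"SYSTEM\CurrentControlSet\Services\NTDS\Parameters", "LdapEnforceChannelBinding"),
    # 2 = require signing on the directory server
    (r"SYSTEM\CurrentControlSet\Services\NTDS\Parameters", "LDAPServerIntegrity"),
    # 2 = require signing on the LDAP client
    (r"SYSTEM\CurrentControlSet\Services\ldap\Parameters", "LDAPClientIntegrity"),
]

for path, name in CHECKS:
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            value, _ = winreg.QueryValueEx(key, name)
        print(f"{name} = {value}")
    except FileNotFoundError:
        print(f"{name}: not set (key or value missing)")
```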
|
| PAN is a ten-character unique alphanumeric identifier issued by the Income Tax Department. The primary purpose of the PAN is to bring universal identification to all financial transactions and to prevent tax evasion by keeping track of monetary transactions, especially those of high-net-worth individuals who can impact the economy.
Structure of PAN
The PAN (or PAN number) is a ten-character alphanumeric unique identifier.
Example: AAAPZ1234C
- The first three characters of the code are letters forming a sequence from AAA to ZZZ
- The first five characters are letters in uppercase, followed by four numerals, and the last (tenth) character is a letter.
The fourth character (P — Individual or Person, in the example above) identifies the type of holder of the card. Each holder type is uniquely defined by a letter from the list below:
- A — Association of persons (AOP)
- B — Body of individuals (BOI)
- C — Company
- F — Firm
- G — Government
- H — HUF (Hindu undivided family)
- L — Local authority
- J — Artificial juridical person
- P — Individual or Person
- T — Trust (AOP)
The fifth character of the PAN is the first character of either:
- Surname or last name of the person, in the case of a "personal" PAN card, where the fourth character is "P" or
- The name of the entity, trust, society, or organization in the case of a company/HUF/firm/AOP/trust/BOI/local authority/artificial juridical person/government, where the fourth character is "C", "H", "F", "A", "T", "B", "L", "J", or "G".
- The last (tenth) character is an alphabetic check character used to verify the validity of the code.
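Based on the structure rules above, a minimal structural validator might look like the following sketch; the algorithm behind the tenth (check) character is not published, so only the format is verified:

```python
import re

# Three letters, a holder-type letter, one more letter, four digits, one check letter.
PAN_RE = re.compile(r"^[A-Z]{3}[ABCFGHLJPT][A-Z][0-9]{4}[A-Z]$")

def looks_like_pan(value: str) -> bool:
    """Structural check only; does not validate the check character."""
    return PAN_RE.fullmatch(value.strip().upper()) is not None

assert looks_like_pan("AAAPZ1234C")     # the example above
assert not looks_like_pan("AAAX1234C")  # too short / invalid structure
```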
|
| Introduction
On occasion it may not be possible to classify based solely on the actual content of a file. This scenario arises when all files in a folder need to be classified as HR, Finance, etc., so that data can be tracked (tagged) back to the source location from which it originated. The following procedures describe how to persistently classify all files in a folder.
Requirements
This process requires the following prerequisites:
- Current versions of the SDM product(s).
- All Files "SDD".
- "Workflow" rule.
- Search "Policy".
Important Notes
Note that proceeding with the following could override the true content-based classifications for any file in these locations.
Process
STEP 1 (SDD)
- Import or create a RegEx to capture all files in a location using the expression below.
From within the Spirion SDM Console Admin > Sensitive Data Types > Add > Select Data Type = Regular Expression > Name = Classify All Files > Expression = (\S|\s|\w|\d)*
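As a quick sanity check, the expression above matches any content (the \w and \d alternatives are redundant, since \S|\s already covers every character). A minimal, purely illustrative Python snippet:

```python
import re

# The catch-all expression from the SDD above; (\S|\s) alone matches any
# character, so this pattern full-matches any file content, including empty text.
catch_all = re.compile(r"(\S|\s|\w|\d)*")

assert catch_all.fullmatch("any content: lines\nand digits 123 and symbols !@#")
assert catch_all.fullmatch("")
```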
STEP 2 (Workflow)
- Import or create a new classification from within the Spirion SDM Console Admin > Classification > Add > Name = XXXX.
Optionally select a color, icon, or weight.
- Click OK.
Highlight the new Classification; from within the Spirion ribbon select Rule > Add.
Workflow Rule
- Provide a workflow rule name and other options
Definition
Select the following from the dropdown options (see the image below).
- "Location", "Contains", Path to folder which all files from within should be classified as XXXX.
Endpoints
- Select the appropriate endpoint from where the local files are being searched.
Actions
- The Classification tag will be automatically selected; if not, select the XXXX classification name created in Step 2 from the "Classify results as:" dropdown.
- For "Execute classification rules:" select "Directly on the endpoint".
- Click Finish in the bottom-right area of the page to save the new rule.
STEP 3 (Search Policies)
- Import or create a new search policy from within the Spirion SDM Console Admin > Policies.
Create a new Policy by clicking Policy > Create.
On the Policy Tab provide a name and optionally a description.
- For Policy Type select Scheduled Task.
- On the Endpoints Tab select the same endpoint chosen for the workflow created previously.
- On the Data Types Tab deselect all.
- On the Location Tab deselect all.
- Click Finish
Select the policy name just created > expand the tree view > expand Search Locations and select Custom Folders.
- Click Add from the Ribbon
- In the new Folder Location field, enter the same folder path used in the workflow, for example (C:\Location\PoC_Test_Data).
- Select "Include in Search" to the right of the folder path for the Scope.
- Click the green check mark to the left of the folder path to save the changes.
Select the policy name just created > expand the tree view > select Sensitive Data Types.
- From within the resulting list select "Classify All Files" as created in Step 1 of this document.
- Select the policy name just created > expand the tree view > Select "Scheduled Tasks" or "Search > Initiate Search" to search and classify all files from within the target folder location.
Outcomes
As a result of the preceding procedural steps, all files within the folder will be classified as XXXX so that data can be tracked (tagged) back to the source location from which it originated. |
| In this blog I explain the numerous ways to identify and validate a credit card number (CCN). The main point of this post is to articulate the complex nature of identifying sensitive data. Identifying these types of sensitive data manually is not practical, hence the need to automate using tools such as Spirion.
Validation of CCN
Below are techniques that can be used to perform cursory checks on CCNs, with an explanation of each of the most common validation techniques.
Luhn Algorithm Check
The Luhn Algorithm is a simple checksum formula used to validate a variety of identification numbers, such as credit card numbers and numerous others such as:
- IMEI numbers
- US National Provider Identifier (NPI) numbers
- Canadian Social Insurance Numbers
- Israeli ID Numbers
- South African ID Numbers
- Greek Social Security Numbers (ΑΜΚΑ)
- McDonald's survey codes
- Taco Bell receipts
- Tractor Supply Co. receipts
In addition, most credit cards and government identification numbers use this algorithm as a simple method of distinguishing valid numbers from mistyped or otherwise incorrect numbers.
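To make the checksum concrete, here is a minimal Python sketch of the Luhn algorithm (illustrative only; automated discovery tools pair it with pattern and context checks):

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    # Double every second digit from the right; subtract 9 when the result exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# A well-known test number that passes the check:
assert luhn_valid("4111 1111 1111 1111")
assert not luhn_valid("4111 1111 1111 1112")
```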
Major Industry Identifier
The first digit of a credit card number represents the category of entity which issued the card.
Issuer identification number
The first six digits of a card number identify the institution that issued the card to the card holder.
Personal Account Number
Digits seven through the second-to-last (the final digit is the checksum) indicate the individual account identifier.
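Putting the structural pieces above together, the following illustrative sketch (field names are my own) decomposes a card number into its MII, IIN, account identifier, and check digit:

```python
def split_ccn(ccn: str) -> dict:
    """Decompose a card number per the structure described above."""
    digits = "".join(d for d in ccn if d.isdigit())
    return {
        "mii": digits[0],          # Major Industry Identifier (first digit)
        "iin": digits[:6],         # Issuer Identification Number (first six digits)
        "account": digits[6:-1],   # individual account identifier
        "check_digit": digits[-1], # Luhn checksum digit
    }

print(split_ccn("4111 1111 1111 1111"))
# {'mii': '4', 'iin': '411111', 'account': '111111111', 'check_digit': '1'}
```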
How many digits in a Credit Card Number?
- Visa and Visa Electron: 13 or 16
- MasterCard: 16
- Discover: 16
- American Express: 15
- Diner's Club: 14 (including enRoute, International, Blanche)
- Maestro: 12 to 19 (multi-national Debit Card)
- Laser: 16 to 19 (Ireland Debit Card)
- Switch: 16, 18 or 19 (United Kingdom Debit Card)
- Solo: 16, 18 or 19 (United Kingdom Debit Card)
- JCB: 15 or 16 (Japan Credit Bureau)
- China UnionPay: 16 (People's Republic of China)
|
| To import an MSSQL .bak file from another location into an Amazon AWS RDS MSSQL instance, you must follow these instructions; there is currently no other option for RDS MSSQL.
Create an S3 bucket and verify that the RDS MSSQL instance can access it; this can be accomplished by modifying the VPC appropriately or by granting public access to the S3 bucket. After this, verify you can connect to the instance on port 1433 using SSMS or telnet, using the instance name, which in my case looked like:
- nameofdb.b6rfjiaj2jhj.us-east-1.rds.amazonaws.com
When accessible, run the following query within SSMS to import the .bak from S3 into RDS MSSQL.
exec msdb.dbo.rds_restore_database @restore_db_name='DBNAME', @s3_arn_to_restore_from='arn:aws:s3:::s3bucketname/DBNAME.bak', @with_norecovery=0, @type='FULL';
You can track the status of the import process using the following AWS native tracking guide.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Procedural.Importing.html#SQLServer.Procedural.Importing.Native.Tracking
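The restore runs asynchronously; per the AWS guide linked above, the task status can be polled from SSMS (DBNAME is a placeholder for your database name):
exec msdb.dbo.rds_task_status @db_name='DBNAME';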
If you receive the following error during import, you must create an RDS "Option Group".
Msg 50000, Level 16, State 0, Procedure msdb.dbo.rds_restore_database, Line 80 [Batch Start Line 0]
Database backup/restore option is not enabled yet or is in the process of being enabled. Please try again later.
USAGE: EXECUTE msdb.dbo.rds_restore_database @restore_db_name, @s3_arn_to_restore_from, [@kms_master_key_arn], [@type], [@with_norecovery]
@restore_db_name : Name of the database being restored
@s3_arn_to_restore_from : S3 ARN of the backup file used to restore database from.
@kms_master_key_arn : KMS customer master key ARN to decrypt the backup file with.
@type : The type of restore. Valid types are FULL or DIFFERENTIAL. Defaults to FULL.
@with_norecovery : The recovery clause to use for the restore operation. Set this to 0, to restore with RECOVERY (database will be online after the restore). Set this to 1, to restore with NORECOVERY (database will be left in the RESTORING state allowing for subsequent differential or log restores). For FULL restore, defaults to 0. For DIFFERENTIAL restores, you must specify 0 or 1.
Navigate to the Amazon RDS portal.
Click Options Group > Create Group
- Provide a name with no spaces or capital letters (for example, mssqlse)
- Provide description (For example, MSSQL Standard Edition)
- Choose "sqlserver-se" for Standard Edition MSSQL
- Choose Engine Version (14.00 in this case)
The new Options Group is now displayed in the available Options Groups for your Amazon RDS portal page.
- Select the newly created Options Group and Add Option.
- Choose SQLSERVER_BACKUP_RESTORE for the Option Details name.
- Choose "Create Custom" from the IAM dropdown option.
- Choose immediately for the Scheduling option.
- Select Add option.
Back on the Amazon RDS DB Portal Page
Associating this group with your DB instance permits the import of a database into the AWS RDS MSSQL instance.
I have done this for a vended application and SharePoint 2019 successfully thus far. Happy importing!
Note - The following server-level roles are not available from within the AWS RDS MSSQL instance (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_SQLServer.html):
- bulkadmin
- dbcreator
- diskadmin
- securityadmin
- serveradmin
- sysadmin
|