Latest Posts

[2018-March-New]Braindump2go Valid SY0-501 VCE Dumps for 100% Passing Exam SY0-501[238-250]

2018 March Latest CompTIA SY0-501 Exam Dumps with PDF and VCE Free Updated Today! Following are some new SY0-501 Real Exam Questions:

1.|2018 Latest SY0-501 Exam Dumps (PDF & VCE) 250Q&As Download:
https://www.braindump2go.com/sy0-501.html

2.|2018 Latest SY0-501 Exam Questions & Answers Download:
https://drive.google.com/drive/folders/1QYBwvoau8PlTQ3bugQuy0pES-zrLrRB1?usp=sharing

QUESTION 238
Joe, a user, wants to send Ann, another user, a confidential document electronically. Which of the following should Joe do to ensure the document is protected from eavesdropping?

A. Encrypt it with Joe’s private key
B. Encrypt it with Joe’s public key
C. Encrypt it with Ann’s private key
D. Encrypt it with Ann’s public key

Answer: D
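The rule generalizes: to protect confidentiality, encrypt with the recipient's public key so that only the matching private key can decrypt. A toy RSA sketch (tiny illustrative primes, never usable for real data) shows the asymmetry:

```python
# Toy RSA: Ann publishes (e, n); anyone can encrypt to her,
# but only her private exponent d can decrypt.
p, q = 61, 53
n = p * q                   # modulus
phi = (p - 1) * (q - 1)
e = 17                      # public exponent, coprime with phi
d = pow(e, -1, phi)         # private exponent (modular inverse, Python 3.8+)

plaintext = 42
ciphertext = pow(plaintext, e, n)   # Joe encrypts with Ann's PUBLIC key
recovered = pow(ciphertext, d, n)   # only Ann's PRIVATE key reverses it
assert recovered == plaintext
```

Encrypting with Joe's keys (options A and B) would let anyone with Joe's public key read it, or no one at all; Ann's private key (option C) is something Joe should never possess.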

QUESTION 239
A director of IR is reviewing a report regarding several recent breaches. The director compiles the following statistics:
– Initial IR engagement time frame
– Length of time before an executive management notice went out
– Average IR phase completion
The director wants to use the data to shorten the response time. Which of the following would accomplish this?

A. CSIRT
B. Containment phase
C. Escalation notifications
D. Tabletop exercise

Answer: D

QUESTION 240
To reduce disk consumption, an organization’s legal department has recently approved a new policy setting the data retention period for sent email at six months. Which of the following is the BEST way to ensure this goal is met?

A. Create a daily encrypted backup of the relevant emails.
B. Configure the email server to delete the relevant emails.
C. Migrate the relevant emails into an “Archived” folder.
D. Implement automatic disk compression on email servers.

Answer: B

QUESTION 241
A security administrator is configuring a new network segment, which contains devices that will be accessed by external users, such as web and FTP servers. Which of the following represents the MOST secure way to configure the new network segment?

A. The segment should be placed on a separate VLAN, and the firewall rules should be configured to allow external traffic.
B. The segment should be placed in the existing internal VLAN to allow internal traffic only.
C. The segment should be placed on an intranet, and the firewall rules should be configured to allow external traffic.
D. The segment should be placed on an extranet, and the firewall rules should be configured to allow both internal and external traffic.

Answer: A

QUESTION 242
Which of the following types of attacks precedes the installation of a rootkit on a server?

A. Pharming
B. DDoS
C. Privilege escalation
D. DoS

Answer: C

QUESTION 243
Which of the following cryptographic algorithms is irreversible?

A. RC4
B. SHA-256
C. DES
D. AES

Answer: B
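SHA-256 is a hash, not a cipher: it produces a fixed-length digest with no key and no decryption operation, whereas RC4, DES, and AES are all reversible given the key. A quick stdlib illustration:

```python
import hashlib

# The digest is always 32 bytes (64 hex characters) regardless of input
# size, and no function exists that maps it back to the input.
digest = hashlib.sha256(b"any input, of any length").hexdigest()
print(len(digest))  # 64
```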

QUESTION 244
A security analyst receives an alert from a WAF with the following payload:
var data = "<test test test>" ++ "<../../../../../../etc/passwd>"
Which of the following types of attacks is this?

A. Cross-site request forgery
B. Buffer overflow
C. SQL injection
D. JavaScript data insertion
E. Firewall evasion script

Answer: D

QUESTION 245
A workstation puts out a network request to locate another system. Joe, a hacker on the network, responds before the real system does, and he tricks the workstation into communicating with him. Which of the following BEST describes what occurred?

A. The hacker used a race condition.
B. The hacker used a pass-the-hash attack.
C. The hacker exploited improper key management.
D. The hacker exploited weak switch configuration.

Answer: D

QUESTION 246
Audit logs from a small company’s vulnerability scanning software show the following findings:
Destinations scanned:
-Server001- Internal human resources payroll server
-Server101- Internet-facing web server
-Server201- SQL server for Server101
-Server301- Jumpbox used by systems administrators accessible from the internal network
Validated vulnerabilities found:
-Server001- Vulnerable to buffer overflow exploit that may allow attackers to install software
-Server101- Vulnerable to buffer overflow exploit that may allow attackers to install software
-Server201- OS updates not fully current
-Server301- Accessible from internal network without the use of jumpbox
-Server301- Vulnerable to highly publicized exploit that can elevate user privileges
Assuming external attackers who are gaining unauthorized information are of the highest concern, which of the following servers should be addressed FIRST?

A. Server001
B. Server101
C. Server201
D. Server301

Answer: B

QUESTION 247
A security analyst wants to harden the company’s VoIP PBX. The analyst is worried that credentials may be intercepted and compromised when IP phones authenticate with the PBX. Which of the following would BEST prevent this from occurring?

A. Implement SRTP between the phones and the PBX.
B. Place the phones and PBX in their own VLAN.
C. Restrict the phone connections to the PBX.
D. Require SIPS on connections to the PBX.

Answer: D

QUESTION 248
An organization is comparing and contrasting migration from its standard desktop configuration to the newest version of the platform. Before this can happen, the Chief Information Security Officer (CISO) voices the need to evaluate the functionality of the newer desktop platform to ensure interoperability with existing software in use by the organization. In which of the following principles of architecture and design is the CISO engaging?

A. Dynamic analysis
B. Change management
C. Baselining
D. Waterfalling

Answer: B

QUESTION 249
A security administrator suspects a MITM attack aimed at impersonating the default gateway is underway. Which of the following tools should the administrator use to detect this attack? (Select two.)

A. Ping
B. Ipconfig
C. Tracert
D. Netstat
E. Dig
F. Nslookup

Answer: BC

QUESTION 250
A user is presented with the following items during the new-hire onboarding process:
– Laptop
– Secure USB drive
– Hardware OTP token
– External high-capacity HDD
– Password complexity policy
– Acceptable use policy
– HASP key
– Cable lock
Which of the following is one component of multifactor authentication?

A. Secure USB drive
B. Cable lock
C. Hardware OTP token
D. HASP key

Answer: C


!!!RECOMMEND!!!

1.|2018 Latest SY0-501 Exam Dumps (PDF & VCE) 250Q&As Download:
https://www.braindump2go.com/sy0-501.html

2.|2018 Latest SY0-501 Study Guide Video:
https://youtu.be/d7_Sx-zuFKI

[2018-March-New]Braindump2go New SY0-501 Exam Dumps Updated for Free Downloading[227-237]


QUESTION 227
An audit takes place after a company-wide restructuring, in which several employees changed roles. The following deficiencies are found during the audit regarding access to confidential data:

Which of the following would be the BEST method to prevent similar audit findings in the future?

A. Implement separation of duties for the payroll department.
B. Implement a DLP solution on the payroll and human resources servers.
C. Implement rule-based access controls on the human resources server.
D. Implement regular permission auditing and reviews.

Answer: A

QUESTION 228
A security engineer is configuring a wireless network that must support mutual authentication of the wireless client and the authentication server before users provide credentials. The wireless network must also support authentication with usernames and passwords. Which of the following authentication protocols MUST the security engineer select?

A. EAP-FAST
B. EAP-TLS
C. PEAP
D. EAP

Answer: C

QUESTION 229
A systems administrator has finished configuring a firewall ACL to allow access to a new web server.
PERMIT TCP from: ANY to: 192.168.1.10:80
PERMIT TCP from: ANY to: 192.168.1.10:443
DENY TCP from: ANY to: ANY
The security administrator confirms from the following packet capture that there is network traffic from the internet to the web server:
TCP 10.23.243.2:2000->192.168.1.10:80 POST/default’s
TCP 172.16.4.100:1934->192.168.1.10:80 GET/session.aspx?user_1_sessionid= a12ad8741d8f7e7ac723847aa8231a
The company’s internal auditor issues a security finding and requests that immediate action be taken. With which of the following is the auditor MOST concerned?

A. Misconfigured firewall
B. Clear text credentials
C. Implicit deny
D. Default configuration

Answer: B

QUESTION 230
Which of the following vulnerability types would the type of hacker known as a script kiddie be MOST dangerous against?

A. Passwords written on the bottom of a keyboard
B. Unpatched exploitable Internet-facing services
C. Unencrypted backup tapes
D. Misplaced hardware token

Answer: B

QUESTION 231
A company hired a third-party firm to conduct an assessment of vulnerabilities exposed to the Internet. The firm informs the company that an exploit exists for an FTP server running a version released eight years ago. The company has decided to keep the system online anyway, as no upgrade exists from the vendor. Which of the following BEST describes the reason why the vulnerability exists?

A. Default configuration
B. End-of-life
C. Weak cipher suite
D. Zero-day threats

Answer: B

QUESTION 232
An in-house penetration tester is using a packet capture device to listen in on network communications.
This is an example of:

A. Passive reconnaissance
B. Persistence
C. Escalation of privileges
D. Exploiting the switch

Answer: A

QUESTION 233
A black hat hacker is enumerating a network and wants to remain covert during the process. The hacker initiates a vulnerability scan. Given the task at hand and the requirement of being covert, which of the following statements BEST indicates that the vulnerability scan meets these requirements?

A. The vulnerability scanner is performing an authenticated scan.
B. The vulnerability scanner is performing local file integrity checks.
C. The vulnerability scanner is performing in network sniffer mode.
D. The vulnerability scanner is performing banner grabbing.

Answer: C

QUESTION 234
A development team has adopted a new approach to projects in which feedback is iterative and multiple iterations of deployments are provided within an application’s full life cycle. Which of the following software development methodologies is the development team using?

A. Waterfall
B. Agile
C. Rapid
D. Extreme

Answer: B

QUESTION 235
A Chief Executive Officer (CEO) suspects someone in the lab testing environment is stealing confidential information after working hours when no one else is around. Which of the following actions can help to prevent this specific threat?

A. Implement time-of-day restrictions.
B. Audit file access times.
C. Secretly install a hidden surveillance camera.
D. Require swipe-card access to enter the lab.

Answer: A

QUESTION 236
A company hires a third-party firm to conduct an assessment of vulnerabilities exposed to the Internet. The firm informs the company that an exploit exists for an FTP server running a version released eight years ago. The company has decided to keep the system online anyway, as no upgrade exists from the vendor. Which of the following BEST describes the reason why the vulnerability exists?

A. Default configuration
B. End-of-life system
C. Weak cipher suite
D. Zero-day threats

Answer: B

QUESTION 237
An organization uses SSO authentication for employee access to network resources. When an employee resigns, as per the organization’s security policy, the employee’s access to all network resources is terminated immediately. Two weeks later, the former employee sends an email to the help desk for a password reset to access payroll information from the human resources server. Which of the following represents the BEST course of action?

A. Approve the former employee’s request, as a password reset would give the former employee access to only the human resources server.
B. Deny the former employee’s request, since the password reset request came from an external email address.
C. Deny the former employee’s request, as a password reset would give the employee access to all network resources.
D. Approve the former employee’s request, as there would not be a security issue with the former employee gaining access to the network.

Answer: C



[2018-March-New]Valid SY0-501 Free Dumps Offered By Braindump2go[216-226]


QUESTION 216
As part of the SDLC, a third party is hired to perform a penetration test. The third party will have access to the source code, integration tests, and network diagrams. Which of the following BEST describes the assessment being performed?

A. Black box
B. Regression
C. White box
D. Fuzzing

Answer: C

QUESTION 217
A dumpster diver recovers several hard drives from a company and is able to obtain confidential data from one of the hard drives. The company then discovers its information is posted online. Which of the following methods would have MOST likely prevented the data from being exposed?

A. Removing the hard drive from its enclosure
B. Using software to repeatedly rewrite over the disk space
C. Using Blowfish encryption on the hard drives
D. Using magnetic fields to erase the data

Answer: D

QUESTION 218
Which of the following are methods to implement HA in a web application server environment? (Select two.)

A. Load balancers
B. Application layer firewalls
C. Reverse proxies
D. VPN concentrators
E. Routers

Answer: AB

QUESTION 219
An application developer is designing an application involving secure transports from one service to another that will pass over port 80 for a request.
Which of the following secure protocols is the developer MOST likely to use?

A. FTPS
B. SFTP
C. SSL
D. LDAPS

Answer: C

QUESTION 220
Which of the following precautions MINIMIZES the risk from network attacks directed at multifunction printers, as well as the impact on functionality at the same time?

A. Isolating the systems using VLANs
B. Installing a software-based IPS on all devices
C. Enabling full disk encryption
D. Implementing unique user PINs for access functions

Answer: A

QUESTION 221
After an identified security breach, an analyst is tasked to initiate the IR process. Which of the following is the NEXT step the analyst should take?

A. Recovery
B. Identification
C. Preparation
D. Documentation
E. Escalation

Answer: B

QUESTION 222
A company was recently audited by a third party. The audit revealed the company’s network devices were transferring files in the clear. Which of the following protocols should the company use to transfer files?

A. HTTPS
B. LDAPS
C. SCP
D. SNMP3

Answer: C

QUESTION 223
During a monthly vulnerability scan, a server was flagged for being vulnerable to an Apache Struts exploit. Upon further investigation, the developer responsible for the server informs the security team that Apache Struts is not installed on the server. Which of the following BEST describes how the security team should react to this incident?

A. The finding is a false positive and can be disregarded
B. The Struts module needs to be hardened on the server
C. The Apache software on the server needs to be patched and updated
D. The server has been compromised by malware and needs to be quarantined.

Answer: D

QUESTION 224
A systems administrator wants to protect data stored on mobile devices that are used to scan and record assets in a warehouse. The control must automatically destroy the secure container of mobile devices if they leave the warehouse. Which of the following should the administrator implement? (Select two.)

A. Geofencing
B. Remote wipe
C. Near-field communication
D. Push notification services
E. Containerization

Answer: AE

QUESTION 225
A security analyst is performing a quantitative risk analysis. The risk analysis should show the potential monetary loss each time a threat or event occurs. Given this requirement, which of the following concepts would assist the analyst in determining this value? (Select two.)

A. ALE
B. AV
C. ARO
D. EF
E. ROI

Answer: BD
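For reference, the per-event loss is the single loss expectancy, SLE = AV × EF, and the annualized loss is ALE = SLE × ARO. A sketch with illustrative figures:

```python
# Quantitative risk calculation (all values are made-up assumptions)
asset_value = 100_000        # AV: value of the asset
exposure_factor = 0.25       # EF: fraction of asset value lost per event
aro = 2.0                    # ARO: expected occurrences per year

sle = asset_value * exposure_factor   # loss each time the threat occurs
ale = sle * aro                       # expected loss per year

print(f"SLE = ${sle:,.0f}, ALE = ${ale:,.0f}")  # SLE = $25,000, ALE = $50,000
```

AV and EF together give the per-occurrence figure the question asks for; ARO and ALE only matter once losses are annualized.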

QUESTION 226
Which of the following AES modes of operation provide authentication? (Select two.)

A. CCM
B. CBC
C. GCM
D. DSA
E. CFB

Answer: AC
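CCM and GCM are authenticated modes: alongside the ciphertext they emit a tag that the receiver verifies, so any tampering is detected. Python's standard library has no AES, but the tag-verification idea can be sketched with an HMAC over the ciphertext (an encrypt-then-MAC stand-in, not GCM itself):

```python
import hashlib
import hmac
import os

key = os.urandom(32)
ciphertext = os.urandom(64)  # stand-in for the output of some AES encryption

# Sender attaches an authentication tag computed over the ciphertext
tag = hmac.new(key, ciphertext, hashlib.sha256).digest()

def verify(ct):
    """Receiver recomputes the tag and compares in constant time."""
    return hmac.compare_digest(tag, hmac.new(key, ct, hashlib.sha256).digest())

# A single flipped bit makes verification fail
tampered = bytes([ciphertext[0] ^ 1]) + ciphertext[1:]
```

CBC and CFB provide confidentiality only, and DSA is a signature algorithm, not an AES mode.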



[2018-March-New]Free SY0-501 PDF Dumps and SY0-501 VCE Dumps Offered by Braindump2go[205-215]


QUESTION 205
A new firewall has been placed into service at an organization. However, a configuration has not been entered on the firewall. Employees on the network segment covered by the new firewall report they are unable to access the network. Which of the following steps should be completed to BEST resolve the issue?

A. The firewall should be configured to prevent user traffic from matching the implicit deny rule.
B. The firewall should be configured with access lists to allow inbound and outbound traffic.
C. The firewall should be configured with port security to allow traffic.
D. The firewall should be configured to include an explicit deny rule.

Answer: A

QUESTION 206
A security analyst is testing both Windows and Linux systems for unauthorized DNS zone transfers within a LAN on comptia.org from example.org.
Which of the following commands should the security analyst use? (Select two.)

A. nslookup
comptia.org
set type=ANY
ls -d example.org
B. nslookup
comptia.org
set type=MX
example.org
C. dig -axfr comptia.org @example.org
D. ipconfig /flushDNS
E. ifconfig eth0 down
ifconfig eth0 up
dhclient renew
F. dig @example.org comptia.org

Answer: AC

QUESTION 207
Which of the following are the MAIN reasons why a systems administrator would install security patches in a staging environment before the patches are applied to the production server? (Select two.)

A. To prevent server availability issues
B. To verify the appropriate patch is being installed
C. To generate a new baseline hash after patching
D. To allow users to test functionality
E. To ensure users are trained on new functionality

Answer: AD

QUESTION 208
A Chief Information Officer (CIO) drafts an agreement between the organization and its employees. The agreement outlines ramifications for releasing information without consent and/or approval. Which of the following BEST describes this type of agreement?

A. ISA
B. NDA
C. MOU
D. SLA

Answer: B

QUESTION 209
Which of the following would meet the requirements for multifactor authentication?

A. Username, PIN, and employee ID number
B. Fingerprint and password
C. Smart card and hardware token
D. Voice recognition and retina scan

Answer: B

QUESTION 210
A manager suspects that an IT employee with elevated database access may be knowingly modifying financial transactions for the benefit of a competitor. Which of the following practices should the manager implement to validate the concern?

A. Separation of duties
B. Mandatory vacations
C. Background checks
D. Security awareness training

Answer: A

QUESTION 211
A penetration tester finds that a company’s login credentials for the email client were being sent in clear text. Which of the following should be done to provide encrypted logins to the email server?

A. Enable IPSec and configure SMTP.
B. Enable SSH and LDAP credentials.
C. Enable MIME services and POP3.
D. Enable an SSL certificate for IMAP services.

Answer: D

QUESTION 212
Before an infection was detected, several of the infected devices attempted to access a URL that was similar to the company name but with two letters transposed. Which of the following BEST describes the attack vector used to infect the devices?

A. Cross-site scripting
B. DNS poisoning
C. Typo squatting
D. URL hijacking

Answer: C
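Typo squatting relies on registering near-miss domains; the "two letters transposed" variant described above can be checked programmatically. A small sketch (function name and examples are my own):

```python
def is_transposition(a: str, b: str) -> bool:
    """True if b equals a with exactly one pair of adjacent characters swapped."""
    if len(a) != len(b) or a == b:
        return False
    diffs = [i for i in range(len(a)) if a[i] != b[i]]
    return (len(diffs) == 2 and diffs[1] == diffs[0] + 1
            and a[diffs[0]] == b[diffs[1]] and a[diffs[1]] == b[diffs[0]])

assert is_transposition("example.com", "exapmle.com")        # swapped "mp"
assert not is_transposition("example.com", "examp1e.com")    # substitution, not a swap
```

Blocklists built from such permutations of a company's own domain are one common defensive use of this check.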

QUESTION 213
A system administrator is reviewing the following information from a compromised server.

Given the above information, which of the following processes was MOST likely exploited via remote buffer overflow attack?

A. Apache
B. LSASS
C. MySQL
D. TFTP

Answer: D

QUESTION 214
Joe, a security administrator, needs to extend the organization’s remote access functionality to be used by staff while travelling. Joe needs to maintain separate access control functionalities for internal, external, and VOIP services. Which of the following represents the BEST access technology for Joe to use?

A. RADIUS
B. TACACS+
C. Diameter
D. Kerberos

Answer: B

QUESTION 215
The availability of a system has been labeled as the highest priority. Which of the following should be focused on the MOST to ensure the objective?

A. Authentication
B. HVAC
C. Full-disk encryption
D. File integrity checking

Answer: B




[2018-March-New]SY0-501 Dumps-PDF and VCE(Full Version)250Q Download in Braindump2go[194-204]


QUESTION 194
An organization’s file server has been virtualized to reduce costs. Which of the following types of backups would be MOST appropriate for the particular file server?

A. Snapshot
B. Full
C. Incremental
D. Differential

Answer: C

QUESTION 195
A wireless network uses a RADIUS server that is connected to an authenticator, which in turn connects to a supplicant. Which of the following represents the authentication architecture in use?

A. Open systems authentication
B. Captive portal
C. RADIUS federation
D. 802.1x

Answer: D

QUESTION 196
An employer requires that employees use a key-generating app on their smartphones to log into corporate applications. In terms of authentication of an individual, this type of access policy is BEST defined as:

A. Something you have.
B. Something you know.
C. Something you do.
D. Something you are.

Answer: A

QUESTION 197
Adhering to a layered security approach, a controlled access facility employs security guards who verify the authorization of all personnel entering the facility. Which of the following terms BEST describes the security control being employed?

A. Administrative
B. Corrective
C. Deterrent
D. Compensating

Answer: A

QUESTION 198
A security analyst is hardening a web server, which should allow a secure certificate-based session using the organization’s PKI infrastructure. The web server should also utilize the latest security techniques and standards. Given this set of requirements, which of the following techniques should the analyst implement to BEST meet these requirements? (Select two.)

A. Install an X.509-compliant certificate.
B. Implement a CRL using an authorized CA.
C. Enable and configure TLS on the server.
D. Install a certificate signed by a public CA.
E. Configure the web server to use a host header.

Answer: AC

QUESTION 199
A manager wants to distribute a report to several other managers within the company. Some of them reside in remote locations that are not connected to the domain but have a local server. Because there is sensitive data within the report and the size of the report is beyond the limit of the email attachment size, emailing the report is not an option. Which of the following protocols should be implemented to distribute the report securely? (Select three.)

A. S/MIME
B. SSH
C. SNMPv3
D. FTPS
E. SRTP
F. HTTPS
G. LDAPS

Answer: BDF

QUESTION 200
An auditor is reviewing the following output from a password-cracking tool:
User1: Password1
User2: Recovery!
User3: Alaskan10
User4: 4Private
User5: PerForMance2
Which of the following methods did the attacker MOST likely use?

A. Hybrid
B. Dictionary
C. Brute force
D. Rainbow table

Answer: A
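A hybrid attack starts from a dictionary and applies programmatic mutations such as capitalization and appended digits, which is exactly the pattern in output like "Alaskan10" and "Password1" above. A minimal sketch (the wordlist and mutation rules are illustrative):

```python
def hybrid_candidates(words):
    """Yield dictionary words plus simple case/digit mutations."""
    for w in words:
        for base in (w, w.capitalize()):
            yield base
            for suffix in range(100):          # append 0..99
                yield base + str(suffix)

wordlist = ["password", "alaskan", "recovery"]
candidates = set(hybrid_candidates(wordlist))

assert "Alaskan10" in candidates
assert "Password1" in candidates
```

A pure dictionary attack would try only the base words, and pure brute force would not produce this dictionary-word-plus-digits pattern so consistently.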

QUESTION 201
Which of the following must be intact for evidence to be admissible in court?

A. Chain of custody
B. Order of violation
C. Legal hold
D. Preservation

Answer: A

QUESTION 202
A vulnerability scanner that uses its running service’s access level to better assess vulnerabilities across multiple assets within an organization is performing a:

A. Credentialed scan.
B. Non-intrusive scan.
C. Privilege escalation test.
D. Passive scan.

Answer: A

QUESTION 203
Which of the following cryptography algorithms will produce a fixed-length, irreversible output?

A. AES
B. 3DES
C. RSA
D. MD5

Answer: D

QUESTION 204
A technician suspects that a system has been compromised. The technician reviews the following log entry:
WARNING- hash mismatch: C:\Windows\SysWOW64\user32.dll
WARNING- hash mismatch: C:\Windows\SysWOW64\kernel32.dll
Based solely on the above information, which of the following types of malware is MOST likely installed on the system?

A. Rootkit
B. Ransomware
C. Trojan
D. Backdoor

Answer: A
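Those log lines come from file-integrity checking: the current hash of each system file is compared against a known-good baseline, and a mismatch on core DLLs suggests a rootkit has replaced them. A simplified sketch of the comparison (file contents here are stand-ins):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Baseline recorded when the system was known-good (contents illustrative)
baseline = {"user32.dll": sha256_hex(b"original library code")}

# Hash of the file as it exists now on the suspect system
current = sha256_hex(b"code replaced by a rootkit")

mismatch = current != baseline["user32.dll"]
if mismatch:
    print("WARNING- hash mismatch: user32.dll")
```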



[2018-March-New]SY0-501 PDF and VCE Free Download in Braindump2go[183-193]


QUESTION 183
A system administrator wants to provide balance between the security of a wireless network and usability. The administrator is concerned with wireless encryption compatibility of older devices used by some employees. Which of the following would provide strong security and backward compatibility when accessing the wireless network?

A. Open wireless network and SSL VPN
B. WPA using a preshared key
C. WPA2 using a RADIUS back-end for 802.1x authentication
D. WEP with a 40-bit key

Answer: C

QUESTION 184
An information security specialist is reviewing the following output from a Linux server.

Based on the above information, which of the following types of malware was installed on the server?

A. Logic bomb
B. Trojan
C. Backdoor
D. Ransomware
E. Rootkit

Answer: C

QUESTION 185
In terms of encrypting data, which of the following is BEST described as a way to safeguard password data by adding random data to it in storage?

A. Using salt
B. Using hash algorithms
C. Implementing elliptical curve
D. Implementing PKI

Answer: A
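Salting defeats precomputed (rainbow-table) attacks because each stored digest depends on random data as well as on the password itself. A minimal sketch in Transact-SQL (the language used elsewhere in this collection); the variable names and hash choice are illustrative:

```sql
-- Sketch only: storing a salted password hash in SQL Server.
-- Names are hypothetical, not from the exam question.
DECLARE @password NVARCHAR(128) = N'P@ssw0rd';
DECLARE @salt VARBINARY(16) = CRYPT_GEN_RANDOM(16);  -- random per-user salt

-- Persist the salt alongside the hash; because the salt is random,
-- two users with the same password get different digests.
SELECT
    @salt AS Salt,
    HASHBYTES('SHA2_256', @salt + CAST(@password AS VARBINARY(256))) AS SaltedHash;
```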

QUESTION 186
A system administrator wants to provide for and enforce wireless access accountability during events where external speakers are invited to make presentations to a mixed audience of employees and non-employees. Which of the following should the administrator implement?

A. Shared accounts
B. Preshared passwords
C. Least privilege
D. Sponsored guest

Answer: D

QUESTION 187
Which of the following would MOST likely appear in an uncredentialed vulnerability scan?

A. Self-signed certificates
B. Missing patches
C. Auditing parameters
D. Inactive local accounts

Answer: A

QUESTION 188
A security analyst observes the following events in the logs of an employee workstation:

Given the information provided, which of the following MOST likely occurred on the workstation?

A. Application whitelisting controls blocked an exploit payload from executing.
B. Antivirus software found and quarantined three malware files.
C. Automatic updates were initiated but failed because they had not been approved.
D. The SIEM log agent was not tuned properly and reported a false positive.

Answer: A

QUESTION 189
When identifying a company’s most valuable assets as part of a BIA, which of the following should be the FIRST priority?

A. Life
B. Intellectual property
C. Sensitive data
D. Public reputation

Answer: A

QUESTION 190
An organization needs to implement a large PKI. Network engineers are concerned that repeated OCSP transmissions will impact network performance. Which of the following should the security analyst recommend in lieu of OCSP?

A. CSR
B. CRL
C. CA
D. OID

Answer: B

QUESTION 191
When considering a third-party cloud service provider, which of the following criteria would be the BEST to include in the security assessment process? (Select two.)

A. Use of performance analytics
B. Adherence to regulatory compliance
C. Data retention policies
D. Size of the corporation
E. Breadth of applications support

Answer: BC

QUESTION 192
Which of the following occurs when the security of a web application relies on JavaScript for input validation?

A. The integrity of the data is at risk.
B. The security of the application relies on antivirus.
C. A host-based firewall is required.
D. The application is vulnerable to race conditions.

Answer: A

QUESTION 193
An analyst is reviewing a simple program for potential security vulnerabilities before it is deployed to a Windows server. Given the following code:

Which of the following vulnerabilities is present?

A. Bad memory pointer
B. Buffer overflow
C. Integer overflow
D. Backdoor

Answer: B



[2018-March-New]Braindump2go 70-767 Dumps VCE 287Q for 100% Passing 70-767 Exam[281-287]

2018 March New Microsoft 70-767 Exam Dumps with PDF and VCE Free Updated Today! Following are some new 70-767 Real Exam Questions:

1.|2018 Latest 70-767 Exam Dumps (PDF & VCE) 287Q&As Download:
https://www.braindump2go.com/70-767.html

2.|2018 Latest 70-767 Exam Questions & Answers Download:
https://drive.google.com/drive/folders/0B75b5xYLjSSNN1RSdlN6Z0VwRjg?usp=sharing

QUESTION 281
Hotspot Question
You deploy a Microsoft Azure SQL Data Warehouse instance. The instance must be available eight hours each day.
You need to pause Azure resources when they are not in use to reduce costs.
What will be the impact of pausing resources? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Answer:

Explanation:
To save costs, you can pause and resume compute resources on-demand. For example, if you won’t be using the database during the night and on weekends, you can pause it during those times, and resume it during the day. You won’t be charged for DWUs while the database is paused.
When you pause a database:
– Compute and memory resources are returned to the pool of available resources in the data center.
– Data Warehouse Unit (DWU) costs are zero for the duration of the pause.
– Data storage is not affected and your data stays intact.
– SQL Data Warehouse cancels all running or queued operations.
When you resume a database:
– SQL Data Warehouse acquires compute and memory resources for your DWU setting.
– Compute charges for your DWUs resume.
– Your data will be available.
– You will need to restart your workload queries.
References: https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-manage-compute-rest-api

QUESTION 282
Drag and Drop Question
You have a data warehouse.
You need to move a table named Fact.ErrorLog to a new filegroup named LowCost.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

Answer:

Explanation:
Step 1: Add a filegroup named LowCost to the database.
First create a new filegroup.
Step 2:
The next stage is to go to the 'Files' page in the same Properties window and add a file to the filegroup (a filegroup always contains one or more files).
Step 3:
Moving a table to a different filegroup means moving the table's clustered index to the new filegroup. While this may seem strange at first, it is not that surprising when you remember that the leaf level of the clustered index actually contains the table data. Moving the clustered index can be done in a single statement using the DROP_EXISTING clause, as follows (using one of the AdventureWorks2008R2 tables as an example):
CREATE UNIQUE CLUSTERED INDEX PK_Department_DepartmentID ON HumanResources.Department(DepartmentID)
WITH (DROP_EXISTING=ON,ONLINE=ON) ON SECONDARY
This recreates the same index but on the SECONDARY filegroup.
References:
http://www.sqlmatters.com/Articles/Moving%20a%20Table%20to%20a%20Different%20Filegroup.aspx

QUESTION 283
Hotspot Question
You have a Microsoft SQL Server Data Warehouse instance that uses SQL Server Analysis Services (SSAS). The instance has a cube containing data from an on-premises SQL Server instance. A measure named Measure1 is configured to calculate the average of a column.
You plan to change Measure1 to a full additive measure and create a new measure named Measure2 that evaluates data based on the first populated row.
You need to configure the measures.
What should you do? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Answer:

Explanation:
Box 1:
The default setting is SUM (fully additive).
Box 2:
FirstNonEmpty: The member value is evaluated as the value of its first child along the time dimension that contains data.
References: https://docs.microsoft.com/en-us/sql/analysis-services/multidimensional-models/define-semiadditive-behavior

QUESTION 284
Drag and Drop Question
You administer a Microsoft SQL Server Master Data Services (MDS) model. All model entity members have passed validation.
The current model version must be committed to form a record of master data that can be audited, and a new version created to allow ongoing management of the master data.
You lock the current version. You need to manage the model versions.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area, and arrange them in the correct order.

Answer:

Explanation:
Box 1: Validate the current version.
In Master Data Services, validate a version to apply business rules to all members in the model version.
You can validate a version after it has been locked.
Box 2: Commit the current version.
In Master Data Services, commit a version of a model to prevent changes to the model’s members and their attributes. Committed versions cannot be unlocked.
Box 3: Create a copy of the current version.
In Master Data Services, copy a version of the model to create a new version of it.

QUESTION 285
Hotspot Question
You manage an inventory system that has a table named Products. The Products table has several hundred columns.
You generate a report that relates two columns named ProductReference and ProductName from the Products table. The result is sorted by a column named QuantityInStock from largest to smallest.
You need to create an index that the report can use.
How should you complete the Transact-SQL statement? To answer, select the appropriate Transact-SQL segments in the answer area.

Answer:
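The answer-area choices are not reproduced here, but the requirements point to a covering index keyed on the sort column. A hedged sketch, with the index name assumed:

```sql
-- One reasonable shape for the index described above (illustrative, not
-- the official answer area). Sorting from largest to smallest suggests a
-- descending key on QuantityInStock, with the two reported columns
-- included so the index covers the query.
CREATE NONCLUSTERED INDEX IX_Products_QuantityInStock   -- hypothetical name
ON Products (QuantityInStock DESC)
INCLUDE (ProductReference, ProductName);
```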

QUESTION 286
Hotspot Question
You have the Microsoft SQL Server Integration Services (SSIS) package shown in the Control flow exhibit. (Click the Exhibit button.)

The package iterates over 100 files in a local folder. For each iteration, the package increments a variable named loop as shown in the Expression task exhibit. (Click the Exhibit button) and then imports a file. The initial value of the variable loop is 0.

You suspect that there may be an issue with the variable value during the loop. You define a breakpoint on the Expression task as shown in the BreakPoint exhibit. (Click the Exhibit button.)

You need to check the value of the loop variable value.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Answer:

Explanation:
Break condition: When the task or container receives the OnPreExecute event. Called when a task is about to execute. This event is raised by a task or a container immediately before it runs.
The loop variable does not reset.
With the debugger, you can break, or suspend, execution of your program to examine your code, evaluate and edit variables in your program, etc.

QUESTION 287
Hotspot Question
You have a database named DB1. You create a Microsoft SQL Server Integration Services (SSIS) package that incrementally imports data from a table named Customers. The package uses an OLE DB data source for connections to DB1. The package defines the following variables.

To support incremental data loading, you create a table by running the following Transact- SQL segment:

You need to create a DML statement that updates the LastKeyByTable table.
How should you complete the Transact-SQL statement? To answer, select the appropriate Transact-SQL segments in the dialog box in the answer area.

Answer:


!!!RECOMMEND!!!

1.|2018 Latest 70-767 Exam Dumps (PDF & VCE) 287Q&As Download:
https://www.braindump2go.com/70-767.html

2.|2018 Latest 70-767 Study Guide Video:
https://youtu.be/di0FBePt_-w

[2018-March-New]100% Real 70-767 Exam PDF 287Q-Braindump2go[270-280]


QUESTION 270
Hotspot Question
You have a series of analytic data models and reports that provide insights into the participation rates for sports at different schools. Users enter information about sports and participants into a client application. The application stores this transactional data in a Microsoft SQL Server database. A SQL Server Integration Services (SSIS) package loads the data into the models.
When users enter data, they do not consistently apply the correct names for the sports. The following table shows examples of the data entry issues.

You need to create a new knowledge base to improve the quality of the sport name data.
How should you configure the knowledge base? To answer, select the appropriate options in the dialog box in the answer area.

Answer:

Explanation:
Spot 1: Create Knowledge base from: None
Select None if you do not want to base the new knowledge base on an existing knowledge base or data file.

QUESTION 271
Drag and Drop Question
You have a series of analytic data models and reports that provide insights into the participation rates for sports at different schools. Users enter information about sports and participants into a client application. The application stores this transactional data in a Microsoft SQL Server database. A SQL Server Integration Services (SSIS) package loads the data into the models.
When users enter data, they do not consistently apply the correct names for the sports. The following table shows examples of the data entry issues.

You need to improve the quality of the data.
Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

Answer:

Explanation:
https://docs.microsoft.com/en-us/sql/data-quality-services/perform-knowledge-discovery

QUESTION 272
Drag and Drop Question
Note: This question is part of a series of questions that use the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.
You have a Microsoft SQL Server data warehouse instance that supports several client applications.
The data warehouse includes the following tables: Dimension.SalesTerritory, Dimension.Customer, Dimension.Date, Fact.Ticket, and Fact.Order.
The Dimension.SalesTerritory and Dimension.Customer tables are frequently updated.
The Fact.Order table is optimized for weekly reporting, but the company wants to change it daily. The Fact.Order table is loaded by using an ETL process. Indexes have been added to the table over time, but the presence of these indexes slows data loading.
All data in the data warehouse is stored on a shared SAN. All tables are in a database named DB1. You have a second database named DB2 that contains copies of production data for a development environment. The data warehouse has grown and the cost of storage has increased. Data older than one year is accessed infrequently and is considered historical.
You have the following requirements:
– Implement table partitioning to improve the manageability of the data warehouse and to avoid the need to repopulate all transactional data each night.
– Use a partitioning strategy that is as granular as possible.
– Partition the Fact.Order table and retain a total of seven years of data.
– Partition the Fact.Ticket table and retain seven years of data. At the end of each month, the partition structure must apply a sliding window strategy to ensure that a new partition is available for the upcoming month, and that the oldest month of data is archived and removed.
– Optimize data loading for the Dimension.SalesTerritory, Dimension.Customer, and Dimension.Date tables.
– Incrementally load all tables in the database and ensure that all incremental changes are processed.
– Maximize the performance during the data loading process for the Fact.Order partition.
– Ensure that historical data remains online and available for querying.
– Reduce ongoing storage costs while maintaining query performance for current data.
You are not permitted to make changes to the client applications.
You need to configure the Fact.Order table.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

Answer:

Explanation:
From scenario: Partition the Fact.Order table and retain a total of seven years of data. Maximize the performance during the data loading process for the Fact.Order partition.
Step 1: Create a partition function.
Using CREATE PARTITION FUNCTION is the first step in creating a partitioned table or index.
Step 2: Create a partition scheme based on the partition function. To migrate SQL Server partition definitions to SQL Data Warehouse simply:
Step 3: Execute an ALTER TABLE command to specify the partition function.
References: https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-tables-partition
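The three steps can be sketched in Transact-SQL as follows. The function, scheme, and boundary values are illustrative, and the final step (specifying the partition function for the table) is shown here as a clustered-index rebuild onto the partition scheme, one common way to move an existing table:

```sql
-- Step 1: a RIGHT range partition function on the order date
-- (boundary dates are illustrative).
CREATE PARTITION FUNCTION pfOrderDate (DATE)
AS RANGE RIGHT FOR VALUES ('2017-01-01', '2017-02-01', '2017-03-01');

-- Step 2: a partition scheme mapping every partition to a filegroup.
CREATE PARTITION SCHEME psOrderDate
AS PARTITION pfOrderDate ALL TO ([PRIMARY]);

-- Step 3: place Fact.Order on the scheme, here by creating its clustered
-- index on the partitioning column (assumes the table is currently a heap;
-- for an existing clustered index, rebuild it with DROP_EXISTING = ON).
CREATE CLUSTERED INDEX CIX_FactOrder_OrderDate
ON Fact.[Order] (OrderDate)
ON psOrderDate (OrderDate);
```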

QUESTION 273
Hotspot Question
You manage a data warehouse in a Microsoft SQL Server instance. Company employee information is imported from the human resources system to a table named Employee in the data warehouse instance. The Employee table was created by running the query shown in the Employee Schema exhibit. (Click the Exhibit button.)

The personal identification number is stored in a column named EmployeeSSN. All values in the EmployeeSSN column must be unique.
When importing employee data, you receive the error message shown in the SQL Error exhibit. (Click the Exhibit button.).

You determine that the Transact-SQL statement shown in the Data Load exhibit is the cause of the error. (Click the Exhibit button.)

You remove the constraint on the EmployeeSSN column. You need to ensure that values in the EmployeeSSN column are unique.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Answer:

Explanation:
Under the ANSI standards SQL:92, SQL:1999 and SQL:2003, a UNIQUE constraint must disallow duplicate non-NULL values but accept multiple NULL values.
In the Microsoft world of SQL Server however, a single NULL is allowed but multiple NULLs are not.
From SQL Server 2008, you can define a unique filtered index based on a predicate that excludes NULLs.
References: https://stackoverflow.com/questions/767657/how-do-i-create-a-unique-constraint-that-also-allows-nulls
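The filtered unique index mentioned above can be sketched as follows (the index name is hypothetical; the table and column names come from the question). It enforces uniqueness for actual values while still permitting any number of NULLs:

```sql
-- SQL Server 2008+: unique only where EmployeeSSN has a value,
-- so multiple NULL rows are allowed.
CREATE UNIQUE NONCLUSTERED INDEX UX_Employee_EmployeeSSN
ON Employee (EmployeeSSN)
WHERE EmployeeSSN IS NOT NULL;
```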

QUESTION 274
Hotspot Question
Your company has a Microsoft SQL Server data warehouse instance. The human resources department assigns all employees a unique identifier. You plan to store this identifier in a new table named Employee.
You create a new dimension to store information about employees by running the following Transact-SQL statement:

You have not added data to the dimension yet. You need to modify the dimension to implement a new column named [EmployeeKey]. The new column must use unique values.
How should you complete the Transact-SQL statements? To answer, select the appropriate Transact-SQL segments in the answer area.

Answer:

QUESTION 275
Drag and Drop Question
You have a Microsoft SQL Server Integration Services (SSIS) package that loads data into a data warehouse each night from a transactional system. The package also loads data from a set of Comma-Separated Values (CSV) files that are provided by your company’s finance department.
The SSIS package processes each CSV file in a folder. The package reads the file name for the current file into a variable and uses that value to write a log entry to a database table.
You need to debug the package and determine the value of the variable before each file is processed.
Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

Answer:

Explanation:
You debug control flows.
The Foreach Loop container is used for looping through a group of files. Put the breakpoint on it.
The Locals window displays information about the local expressions in the current scope of the Transact-SQL debugger.
References: https://docs.microsoft.com/en-us/sql/integration-services/troubleshooting/debugging-control-flow
http://blog.pragmaticworks.com/looping-through-a-result-set-with-the-foreach-loop

QUESTION 276
Hotspot Question
You create a Microsoft SQL Server Integration Services (SSIS) package as shown in the SSIS Package exhibit. (Click the Exhibit button.)

The package uses data from the Products table and the Prices table. Properties of the Prices source are shown in the OLE DB Source Editor exhibit (Click the Exhibit Button.) and the Advanced Editor for Prices exhibit (Click the Exhibit button.)

You join the Products and Prices tables by using the ReferenceNr column.
You need to resolve the error with the package.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Answer:

Explanation:
There are two important sort properties that must be set for the source or upstream transformation that supplies data to the Merge and Merge Join transformations:
The Merge Join Transformation requires sorted data for its inputs.
If you do not use a Sort transformation to sort the data, you must set these sort properties manually on the source or the upstream transformation.
References: https://docs.microsoft.com/en-us/sql/integration-services/data-flow/transformations/sort-data-for-the-merge-and-merge-join-transformations

QUESTION 277
Drag and Drop Question
You deploy a Microsoft SQL Server database that contains a staging table named EmailAddress_Import. Each night, a bulk process will import customer information from an external database, cleanse the data, and then insert it into the EmailAddress table. Both tables contain a column named EmailAddressValue that stores the email address.
You need to implement the logic to meet the following requirements:
– Email addresses that are present in the EmailAddress_Import table but not in the EmailAddress table must be inserted into the EmailAddress table.
– Email addresses that are not in the EmailAddress_Import table but are present in the EmailAddress table must be deleted from the EmailAddress table.
How should you complete the Transact-SQL statement? To answer, drag the appropriate Transact-SQL segments to the correct locations. Each Transact-SQL segment may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

Answer:

Explanation:
Box 1: EmailAddress
The EmailAddress table is the target.
Box 2: EmailAddress_import
The EmailAddress_import table is the source.
Box 3: NOT MATCHED BY TARGET
Box 4: NOT MATCHED BY SOURCE
References: https://docs.microsoft.com/en-us/sql/t-sql/statements/merge-transact-sql
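Putting the four boxes together, the completed statement would look roughly like this (only the table and column names given in the question are used; the alias names are illustrative):

```sql
-- Insert rows present only in the source; delete rows present only
-- in the target, per the two requirements above.
MERGE INTO EmailAddress AS Target
USING EmailAddress_Import AS Source
    ON Target.EmailAddressValue = Source.EmailAddressValue
WHEN NOT MATCHED BY TARGET THEN
    INSERT (EmailAddressValue) VALUES (Source.EmailAddressValue)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;
```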

QUESTION 278
Drag and Drop Question
Note: This question is part of a series of questions that use the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.
You have a Microsoft SQL Server data warehouse instance that supports several client applications.
The data warehouse includes the following tables: Dimension.SalesTerritory, Dimension.Customer, Dimension.Date, Fact.Ticket, and Fact.Order. The Dimension.SalesTerritory and Dimension.Customer tables are frequently updated. The Fact.Order table is optimized for weekly reporting, but the company wants to change it daily. The Fact.Order table is loaded by using an ETL process. Indexes have been added to the table over time, but the presence of these indexes slows data loading.
All data in the data warehouse is stored on a shared SAN. All tables are in a database named DB1. You have a second database named DB2 that contains copies of production data for a development environment. The data warehouse has grown and the cost of storage has increased. Data older than one year is accessed infrequently and is considered historical.
You have the following requirements:
Implement table partitioning to improve the manageability of the data warehouse and to avoid the need to repopulate all transactional data each night. Use a partitioning strategy that is as granular as possible.
Partition the Fact.Order table and retain a total of seven years of data.
Partition the Fact.Ticket table and retain seven years of data. At the end of each month, the partition structure must apply a sliding window strategy to ensure that a new partition is available for the upcoming month, and that the oldest month of data is archived and removed.
Optimize data loading for the Dimension.SalesTerritory, Dimension.Customer, and Dimension.Date tables.
Incrementally load all tables in the database and ensure that all incremental changes are processed.
Maximize the performance during the data loading process for the Fact.Order partition.
Ensure that historical data remains online and available for querying.
Reduce ongoing storage costs while maintaining query performance for current data.
You are not permitted to make changes to the client applications.
You need to implement partitioning for the Fact.Ticket table.
Which three actions should you perform in sequence? To answer, drag the appropriate actions to the correct locations. Each action may be used once, more than once or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: More than one combination of answer choices is correct. You will receive credit for any of the correct combinations you select.

Answer:

Explanation:
From scenario: – Partition the Fact.Ticket table and retain seven years of data. At the end of each month, the partition structure must apply a sliding window strategy to ensure that a new partition is available for the upcoming month, and that the oldest month of data is archived and removed.
The detailed steps for the recurring partition maintenance tasks are:
References: https://docs.microsoft.com/en-us/sql/relational-databases/tables/manage-retention-of-historical-data-in-system-versioned-temporal-tables
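One month of the sliding-window maintenance described above can be sketched as follows; the partition function, scheme, boundary dates, and archive table are illustrative, not from the exam's answer area:

```sql
-- 1. Make a partition available for the upcoming month.
ALTER PARTITION SCHEME psTicketDate NEXT USED [PRIMARY];
ALTER PARTITION FUNCTION pfTicketDate() SPLIT RANGE ('2018-04-01');

-- 2. Archive the oldest month by switching it out to a staging table
--    (which must match the source table's schema), then remove the
--    now-empty boundary.
ALTER TABLE Fact.Ticket SWITCH PARTITION 1 TO Fact.Ticket_Archive;
ALTER PARTITION FUNCTION pfTicketDate() MERGE RANGE ('2011-04-01');
```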

QUESTION 279
Drag and Drop Question
Note: This question is part of a series of questions that use the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.
You have a Microsoft SQL Server data warehouse instance that supports several client applications.
The data warehouse includes the following tables: Dimension.SalesTerritory, Dimension.Customer, Dimension.Date, Fact.Ticket, and Fact.Order. The Dimension.SalesTerritory and Dimension.Customer tables are frequently updated. The Fact.Order table is optimized for weekly reporting, but the company wants to change it daily. The Fact.Order table is loaded by using an ETL process. Indexes have been added to the table over time, but the presence of these indexes slows data loading.
All data in the data warehouse is stored on a shared SAN. All tables are in a database named DB1. You have a second database named DB2 that contains copies of production data for a development environment. The data warehouse has grown and the cost of storage has increased. Data older than one year is accessed infrequently and is considered historical.
You have the following requirements:
Implement table partitioning to improve the manageability of the data warehouse and to avoid the need to repopulate all transactional data each night. Use a partitioning strategy that is as granular as possible.
Partition the Fact.Order table and retain a total of seven years of data.
Partition the Fact.Ticket table and retain seven years of data. At the end of each month, the partition structure must apply a sliding window strategy to ensure that a new partition is available for the upcoming month, and that the oldest month of data is archived and removed.
Optimize data loading for the Dimension.SalesTerritory, Dimension.Customer, and Dimension.Date tables.
Incrementally load all tables in the database and ensure that all incremental changes are processed.
Maximize the performance during the data loading process for the Fact.Order partition.
Ensure that historical data remains online and available for querying.
Reduce ongoing storage costs while maintaining query performance for current data.
You are not permitted to make changes to the client applications.
You need to optimize data loading for the Dimension.Customer table.
Which three Transact-SQL segments should you use to develop the solution? To answer, move the appropriate Transact-SQL segments from the list of Transact-SQL segments to the answer area and arrange them in the correct order.
NOTE: You will not need all of the Transact-SQL segments.

Answer:

Explanation:
Step 1: USE DB1
From Scenario: All tables are in a database named DB1. You have a second database named DB2 that contains copies of production data for a development environment.
Step 2: EXEC sys.sp_cdc_enable_db
Before you can enable a table for change data capture, the database must be enabled. To enable the database, use the sys.sp_cdc_enable_db stored procedure.
sys.sp_cdc_enable_db has no parameters.
Step 3: EXEC sys.sp_cdc_enable_table
@source_schema = N'schema', etc.
sys.sp_cdc_enable_table enables change data capture for the specified source table in the current database.
Partial syntax:
sys.sp_cdc_enable_table
[ @source_schema = ] 'source_schema' ,
[ @source_name = ] 'source_name'
[ , [ @capture_instance = ] 'capture_instance' ]
[ , [ @supports_net_changes = ] supports_net_changes ]
etc.
References: https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sys-sp-cdc-enable-table-transact-sql
https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sys-sp-cdc-enable-db-transact-sql
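The three steps above can be combined into one script for the Dimension.Customer table from the scenario; the @role_name value is an assumption (NULL means no gating role):

```sql
USE DB1;
GO
-- Change data capture must be enabled at the database level first.
EXEC sys.sp_cdc_enable_db;
GO
-- Then enable CDC for the table whose loading is being optimized.
EXEC sys.sp_cdc_enable_table
    @source_schema = N'Dimension',
    @source_name   = N'Customer',
    @role_name     = NULL;   -- assumption: no role restricts access to the change data
GO
```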

QUESTION 280
Hotspot Question
You have a Microsoft SQL Server Integration Services (SSIS) package that contains a Data Flow task as shown in the Data Flow exhibit. (Click the Exhibit button.)

You install Data Quality Services (DQS) on the same server that hosts SSIS and deploy a knowledge base to manage customer email addresses. You add a DQS Cleansing transform to the Data Flow as shown in the Cleansing exhibit. (Click the Exhibit button.)

You create a Conditional Split transform as shown in the Splitter exhibit. (Click the Exhibit button.)

You need to split the output of the DQS Cleansing task to obtain only Correct values from the EmailAddress column.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.

Answer:

Explanation:
The DQS Cleansing component takes input records, sends them to a DQS server, and gets them back corrected. The component can output not only the corrected data, but also additional columns that may be useful to you, such as the status columns. There is one status column for each mapped field, and another that aggregates the status for the whole record. This record status column can be very useful in some scenarios, especially when records are further processed in different ways depending on their status. In such cases, it is recommended to place a Conditional Split component below the DQS Cleansing component and configure it to split the records into groups based on the record status (or on other columns, such as a specific field status).
References: https://blogs.msdn.microsoft.com/dqs/2011/07/18/using-the-ssis-dqs-cleansing-component/



[2018-March-New]100% Valid 70-767 Exam Questions PDF 287Q Provided by Braindump2go[259-269]

2018 March New Microsoft 70-697 Exam Dumps with PDF and VCE Free Updated Today! Following are some new 70-697 Real Exam Questions:

1.|2018 Latest 70-697 Exam Dumps (PDF & VCE) 287Q&As Download:
https://www.braindump2go.com/70-767.html

2.|2018 Latest 70-767 Exam Questions & Answers Download:
https://drive.google.com/drive/folders/0B75b5xYLjSSNN1RSdlN6Z0VwRjg?usp=sharing

QUESTION 259
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have the following line-of-business solutions:
ERP system
Online WebStore
Partner extranet
One or more Microsoft SQL Server instances support each solution. Each solution has its own product catalog. You have an additional server that hosts SQL Server Integration Services (SSIS) and a data warehouse. You populate the data warehouse with data from each of the line-of-business solutions. The data warehouse does not store primary key values from the individual source tables.
The database for each solution has a table named Products that stores product information. The Products table in each database uses a separate and unique key for product records. Each table shares a column named ReferenceNr between the databases. This column is used to create queries that involve more than one solution.
You need to load data from the individual solutions into the data warehouse nightly. The following requirements must be met:
If a change is made to the ReferenceNr column in any of the sources, set the value of IsDisabled to True and create a new row in the Products table.
If a row is deleted in any of the sources, set the value of IsDisabled to True in the data warehouse.
Solution: Perform the following actions:
Enable the Change Tracking for the Product table in the source databases.
Query the cdc.fn_cdc_get_all_changes_capture_dbo_products function from the sources for updated rows.
Set the IsDisabled column to True for rows with the old ReferenceNr value.
Create a new row in the data warehouse Products table with the new ReferenceNr value.
Does the solution meet the goal?

A. Yes
B. No

Answer: B
Explanation:
The solution enables Change Tracking but then queries a Change Data Capture (CDC) function; the cdc.fn_cdc_get_all_changes_* functions are available only when CDC, a separate feature, is enabled. The solution also handles only updated rows; deleted rows must be handled as well.
References: https://solutioncenter.apexsql.com/enable-use-sql-server-change-data-capture/
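
For comparison, a load based on Change Tracking (rather than CDC) reads changes through the CHANGETABLE function. A hedged sketch, with illustrative database, table, and column names:

```sql
-- Enable Change Tracking at the database level, then on the table
ALTER DATABASE ERP SET CHANGE_TRACKING = ON
    (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.Products ENABLE CHANGE_TRACKING;

-- Read all changes since the last synchronization version.
-- SYS_CHANGE_OPERATION distinguishes inserts (I), updates (U),
-- and deletes (D), so both updated and deleted rows can be handled.
DECLARE @last_sync_version bigint = 0;  -- persisted from the previous load

SELECT ct.ProductID, ct.SYS_CHANGE_OPERATION
FROM CHANGETABLE(CHANGES dbo.Products, @last_sync_version) AS ct;
```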

QUESTION 260
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You are developing a Microsoft SQL Server Integration Services (SSIS) project. The project consists of several packages that load data warehouse tables.
You need to extend the control flow design for each package to use the following control flow while minimizing development efforts and maintenance:

Solution: You add the control flow to an ASP.NET assembly. You add a script task that references this assembly to each data warehouse load package.
Does the solution meet the goal?

A. Yes
B. No

Answer: B
Explanation:
A package consists of a control flow and, optionally, one or more data flows. You create the control flow in a package by using the Control Flow tab in SSIS Designer.
References: https://docs.microsoft.com/en-us/sql/integration-services/control-flow/control-flow

QUESTION 261
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a data warehouse that stores information about products, sales, and orders for a manufacturing company. The instance contains a database that has two tables named SalesOrderHeader and SalesOrderDetail. SalesOrderHeader has 500,000 rows and SalesOrderDetail has 3,000,000 rows.
Users report performance degradation when they run the following stored procedure:

You need to optimize performance.
Solution: You run the following Transact-SQL statement:

Does the solution meet the goal?

A. Yes
B. No

Answer: B
Explanation:
Microsoft recommends against specifying 0 PERCENT or 0 ROWS in a CREATE STATISTICS ... WITH SAMPLE statement. When 0 PERCENT or 0 ROWS is specified, the statistics object is created but contains no statistics data.
References: https://docs.microsoft.com/en-us/sql/t-sql/statements/create-statistics-transact-sql
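
A hedged sketch of what a useful sampled statistics object might look like here (the statistics and column names are illustrative):

```sql
-- A sampled statistics object on a join column of the large detail table
CREATE STATISTICS stat_SalesOrderDetail_SalesOrderID
ON dbo.SalesOrderDetail (SalesOrderID)
WITH SAMPLE 5 PERCENT;

-- By contrast, WITH SAMPLE 0 PERCENT (or 0 ROWS) creates the statistics
-- object but leaves it empty, so the query optimizer gains nothing
```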

QUESTION 262
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You are developing a Microsoft SQL Server Integration Services (SSIS) project. The project consists of several packages that load data warehouse tables.
You need to extend the control flow design for each package to use the following control flow while minimizing development efforts and maintenance:

Solution: You add the control flow to a control flow package part.
You add an instance of the control flow package part to each data warehouse load package.
Does the solution meet the goal?

A. Yes
B. No

Answer: A
Explanation:
A package consists of a control flow and, optionally, one or more data flows.
You create the control flow in a package by using the Control Flow tab in SSIS Designer.
References: https://docs.microsoft.com/en-us/sql/integration-services/control-flow/control-flow

QUESTION 263
You have a data quality project that focuses on the Products catalog for the company. The data includes a product reference number.
The product reference should use the following format:
Two letters followed by an asterisk and then four or five numbers. An example of a valid number is XX*55522.
Any reference number that does not conform to the format must be rejected during the data cleansing.
You need to add a Data Quality Services (DQS) domain rule in the Products domain.
Which rule should you use?

A. value matches pattern ZA*9876[5]
B. value matches pattern AZ[*]1234[5]
C. value matches regular expression AZ[*]1234[5]
D. value matches pattern [a-zA-Z][a-zA-Z]*[0-9][0-9] [0-9][0-9] [0-9]?

Answer: A
Explanation:
For a pattern matching rule:
Any letter (A…Z) can be used as a pattern for any letter; case insensitive.
Any digit (0…9) can be used as a pattern for any digit.
Any special character, except a letter or a digit, can be used as a pattern for itself.
Brackets, [], define optional matching.
Example: ABC:0000
This rule implies that the data will contain three parts: any three letters, followed by a colon (:), followed by any four digits.

QUESTION 264
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a Microsoft Azure SQL Data Warehouse instance that must be available six months a day for reporting.
You need to pause the compute resources when the instance is not being used.
Solution: You use SQL Server Management Studio (SSMS).
Does the solution meet the goal?

A. Yes
B. No

Answer: B
Explanation:
To pause a SQL Data Warehouse database, use any of these individual methods.
Pause compute with Azure portal
Pause compute with PowerShell
Pause compute with REST APIs
References:
https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-manage-compute-overview

QUESTION 265
Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.
You are a database administrator for an e-commerce company that runs an online store. The company has the databases described in the following table.

Each day, data from the table OnlineOrder in DB2 must be exported by partition. The tables must not be locked during the process.
You need to write a Microsoft SQL Server Integration Services (SSIS) package that performs the data export.
What should you use?

A. Lookup transformation
B. Merge transformation
C. Merge Join transformation
D. MERGE statement
E. Union All transformation
F. Balanced Data Distributor transformation
G. Sequential container
H. Foreach Loop container

Answer: E
Explanation:
The Union All transformation combines multiple inputs into one output. For example, the outputs from five different Flat File sources can be inputs to the Union All transformation and combined into one output.
References: https://docs.microsoft.com/en-us/sql/integration-services/data-flow/transformations/union-all-transformation

QUESTION 266
Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.
You are a database administrator for an e-commerce company that runs an online store. The company has the databases described in the following table.

Product prices are updated and are stored in a table named Products on DB1. The Products table is deleted and refreshed each night from MDS by using a Microsoft SQL Server Integration Services (SSIS) package. None of the data sources are sorted.
You need to update the SSIS package to add current prices to the Products table.
What should you use?

A. Lookup transformation
B. Merge transformation
C. Merge Join transformation
D. MERGE statement
E. Union All transformation
F. Balanced Data Distributor transformation
G. Sequential container
H. Foreach Loop container

Answer: D
Explanation:
In the current release of SQL Server Integration Services, the SQL statement in an Execute SQL task can contain a MERGE statement. This MERGE statement enables you to accomplish multiple INSERT, UPDATE, and DELETE operations in a single statement.
References: https://docs.microsoft.com/en-us/sql/integration-services/control-flow/merge-in-integration-services-packages
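
A hedged sketch of such a MERGE inside an Execute SQL task, bringing current prices into the Products table (the staging table and column names are illustrative, not from the exhibit):

```sql
MERGE dbo.Products AS tgt
USING dbo.StagingPrices AS src
    ON tgt.ProductKey = src.ProductKey
WHEN MATCHED AND tgt.Price <> src.Price THEN
    UPDATE SET tgt.Price = src.Price          -- update changed prices
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ProductKey, Price)                -- add new products
    VALUES (src.ProductKey, src.Price);
```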

QUESTION 267
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a data warehouse that stores information about products, sales, and orders for a manufacturing company. The instance contains a database that has two tables named SalesOrderHeader and SalesOrderDetail. SalesOrderHeader has 500,000 rows and SalesOrderDetail has 3,000,000 rows.
Users report performance degradation when they run the following stored procedure:

You need to optimize performance.
Solution: You run the following Transact-SQL statement:

Does the solution meet the goal?

A. Yes
B. No

Answer: B
Explanation:
100 out of 500,000 rows is too small a sample size.
References: https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-tables-statistics

QUESTION 268
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You are developing a Microsoft SQL Server Integration Services (SSIS) project. The project consists of several packages that load data warehouse tables.
You need to extend the control flow design for each package to use the following control flow while minimizing development efforts and maintenance:

Solution: You add the control flow to a script task. You add an instance of the script task to the storage account in Microsoft Azure.
Does the solution meet the goal?

A. Yes
B. No

Answer: B
Explanation:
A package consists of a control flow and, optionally, one or more data flows.
You create the control flow in a package by using the Control Flow tab in SSIS Designer.
References: https://docs.microsoft.com/en-us/sql/integration-services/control-flow/control-flow

QUESTION 269
Hotspot Question
You are testing a Microsoft SQL Server Integration Services (SSIS) package. The package includes the Control Flow task shown in the Control Flow exhibit (Click the Exhibit button) and the Data Flow task shown in the Data Flow exhibit. (Click the Exhibit button.)


You declare a variable named Seed as shown in the Variables exhibit. (Click the Exhibit button.) The variable is changed by the Script task during execution.

You need to be able to interrogate the value of the Seed variable after the Script task completes execution.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.

Answer:

Explanation:
https://docs.microsoft.com/en-us/sql/integration-services/variables-window



[2018-March-New]Free Microsoft 287Q 70-767 Dumps VCE and PDF Braindump2go Offers[248-258]

2018 March New Microsoft 70-767 Exam Dumps with PDF and VCE Free Updated Today! Following are some new 70-767 Real Exam Questions:

1.|2018 Latest 70-767 Exam Dumps (PDF & VCE) 287Q&As Download:
https://www.braindump2go.com/70-767.html

2.|2018 Latest 70-767 Exam Questions & Answers Download:
https://drive.google.com/drive/folders/0B75b5xYLjSSNN1RSdlN6Z0VwRjg?usp=sharing

QUESTION 248
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a Microsoft Azure SQL Data Warehouse instance that must be available six months a day for reporting.
You need to pause the compute resources when the instance is not being used.
Solution: You use the Azure portal.
Does the solution meet the goal?

A. Yes
B. No

Answer: A

QUESTION 249
Note: This question is part of a series of questions that use the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.
You have a Microsoft SQL Server data warehouse instance that supports several client applications.
The data warehouse includes the following tables: Dimension.SalesTerritory, Dimension.Customer, Dimension.Date, Fact.Ticket, and Fact.Order.
The Dimension.SalesTerritory and Dimension.Customer tables are frequently updated.
The Fact.Order table is optimized for weekly reporting, but the company wants to change it to daily. The Fact.Order table is loaded by using an ETL process. Indexes have been added to the table over time, but the presence of these indexes slows data loading.
All data in the data warehouse is stored on a shared SAN. All tables are in a database named DB1. You have a second database named DB2 that contains copies of production data for a development environment. The data warehouse has grown and the cost of storage has increased. Data older than one year is accessed infrequently and is considered historical.
You have the following requirements:
You are not permitted to make changes to the client applications.
You need to optimize the storage for the data warehouse.
What change should you make?

A. Partition the Fact.Order table, and move historical data to new filegroups on lower-cost storage.
B. Create new tables on lower-cost storage, move the historical data to the new tables, and then shrink the database.
C. Remove the historical data from the database to leave available space for new data.
D. Move historical data to new tables on lower-cost storage.

Answer: A
Explanation:
Create the load staging table in the same filegroup as the partition you are loading. Create the unload staging table in the same filegroup as the partition you are deleting.
From scenario: Data older than one year is accessed infrequently and is considered historical.
References: https://blogs.msdn.microsoft.com/sqlcat/2013/09/16/top-10-best-practices-for-building-a-large-scale-relational-data-warehouse/

QUESTION 250
You have a Microsoft SQL Server Integration Services (SSIS) package that includes the control flow shown in the following diagram.

You need to choose the enumerator for the Foreach Loop container.
Which enumerator should you use?

A. Foreach SMO Enumerator
B. Foreach Azure Blob Enumerator
C. Foreach NodeList Enumerator
D. Foreach ADO Enumerator

Answer: D
Explanation:
Use the Foreach ADO enumerator to enumerate rows in tables. For example, you can get the rows in an ADO recordset.

QUESTION 251
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have the following line-of-business solutions:
ERP system
Online WebStore
Partner extranet
One or more Microsoft SQL Server instances support each solution. Each solution has its own product catalog. You have an additional server that hosts SQL Server Integration Services (SSIS) and a data warehouse. You populate the data warehouse with data from each of the line-of-business solutions. The data warehouse does not store primary key values from the individual source tables.
The database for each solution has a table named Products that stores product information. The Products table in each database uses a separate and unique key for product records. Each table shares a column named ReferenceNr between the databases. This column is used to create queries that involve more than one solution.
You need to load data from the individual solutions into the data warehouse nightly. The following requirements must be met:
If a change is made to the ReferenceNr column in any of the sources, set the value of IsDisabled to True and create a new row in the Products table.
If a row is deleted in any of the sources, set the value of IsDisabled to True in the data warehouse.
Solution: Perform the following actions:
Enable the Change Tracking feature for the Products table in the three source databases.
Query the CHANGETABLE function from the sources for the deleted rows.
Set the IsDIsabled column to True on the data warehouse Products table for the listed rows.
Does the solution meet the goal?

A. Yes
B. No

Answer: B
Explanation:
We must check for updated rows, not just deleted rows.
References: https://www.timmitchell.net/post/2016/01/18/getting-started-with-change-tracking-in-sql-server/

QUESTION 252
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a data warehouse that stores information about products, sales, and orders for a manufacturing company. The instance contains a database that has two tables named SalesOrderHeader and SalesOrderDetail. SalesOrderHeader has 500,000 rows and SalesOrderDetail has 3,000,000 rows.
Users report performance degradation when they run the following stored procedure:

You need to optimize performance.
Solution: You run the following Transact-SQL statement:

Does the solution meet the goal?

A. Yes
B. No

Answer: A
Explanation:
You can specify the sample size as a percent. A 5% statistics sample size would be helpful.
References: https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-tables-statistics

QUESTION 253
Note: This question is part of a series of questions that use the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.
You have a Microsoft SQL Server data warehouse instance that supports several client applications.
The data warehouse includes the following tables: Dimension.SalesTerritory, Dimension.Customer, Dimension.Date, Fact.Ticket, and Fact.Order. The Dimension.SalesTerritory and Dimension.Customer tables are frequently updated. The Fact.Order table is optimized for weekly reporting, but the company wants to change it to daily. The Fact.Order table is loaded by using an ETL process. Indexes have been added to the table over time, but the presence of these indexes slows data loading.
All data in the data warehouse is stored on a shared SAN. All tables are in a database named DB1. You have a second database named DB2 that contains copies of production data for a development environment. The data warehouse has grown and the cost of storage has increased. Data older than one year is accessed infrequently and is considered historical.
You have the following requirements:
Implement table partitioning to improve the manageability of the data warehouse and to avoid the need to repopulate all transactional data each night. Use a partitioning strategy that is as granular as possible.
Partition the Fact.Order table and retain a total of seven years of data.
Partition the Fact.Ticket table and retain seven years of data. At the end of each month, the partition structure must apply a sliding window strategy to ensure that a new partition is available for the upcoming month, and that the oldest month of data is archived and removed.
Optimize data loading for the Dimension.SalesTerritory, Dimension.Customer, and Dimension.Date tables.
Incrementally load all tables in the database and ensure that all incremental changes are processed.
Maximize the performance during the data loading process for the Fact.Order partition.
Ensure that historical data remains online and available for querying.
Reduce ongoing storage costs while maintaining query performance for current data.
You are not permitted to make changes to the client applications.
You need to implement the data partitioning strategy.
How should you partition the Fact.Order table?

A. Create 17,520 partitions.
B. Use a granularity of two days.
C. Create 2,557 partitions.
D. Create 730 partitions.

Answer: C
Explanation:
We create one partition for each day. Seven years times 365 days is 2,555 days; make that 2,557 to allow for leap years.
From scenario: Partition the Fact.Order table and retain a total of seven years of data. Maximize the performance during the data loading process for the Fact.Order partition.
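
A hedged sketch of daily partitioning for Fact.Order; only the first few boundary values are shown, and a real script would generate one boundary per day across the seven-year window:

```sql
CREATE PARTITION FUNCTION pfOrderDate (date)
AS RANGE RIGHT FOR VALUES ('2011-01-01', '2011-01-02', '2011-01-03');
-- ... 2,557 boundaries in total for seven years of daily partitions

CREATE PARTITION SCHEME psOrderDate
AS PARTITION pfOrderDate ALL TO ([PRIMARY]);
```

SQL Server supports up to 15,000 partitions per table, so daily granularity over a seven-year window fits comfortably.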

QUESTION 254
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a Microsoft Azure SQL Data Warehouse instance that must be available six months a day for reporting.
You need to pause the compute resources when the instance is not being used.
Solution: You use SQL Server Configuration Manager.
Does the solution meet the goal?

A. Yes
B. No

Answer: B
Explanation:
To pause a SQL Data Warehouse database, use any of these individual methods.
Pause compute with Azure portal
Pause compute with PowerShell
Pause compute with REST APIs
References:
https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-manage-compute-overview

QUESTION 255
Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.
You are a database administrator for an e-commerce company that runs an online store.
The company has three databases as described in the following table.

You plan to load at least one million rows of data each night from DB1 into the OnlineOrder table. You must load data into the correct partitions using a parallel process.
You create 24 Data Flow tasks. You must place the tasks into a component to allow parallel load. After all of the load processes complete, the process must proceed to the next task.
You need to load the data for the OnlineOrder table.
What should you use?

A. Lookup transformation
B. Merge transformation
C. Merge Join transformation
D. MERGE statement
E. Union All transformation
F. Balanced Data Distributor transformation
G. Sequential container
H. Foreach Loop container

Answer: H
Explanation:
The Parallel Loop Task is an SSIS Control Flow task, which can execute multiple iterations of the standard Foreach Loop Container concurrently.
References: http://www.cozyroc.com/ssis/parallel-loop-task

QUESTION 256
Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.
You are a database administrator for an e-commerce company that runs an online store.
The company has the databases described in the following table.

Each week, you import a product catalog from a partner company to a staging table in DB2.
You need to create a stored procedure that will update the staging table by inserting new products and deleting discontinued products.
What should you use?

A. Lookup transformation
B. Merge transformation
C. Merge Join transformation
D. MERGE statement
E. Union All transformation
F. Balanced Data Distributor transformation
G. Sequential container
H. Foreach Loop container

Answer: D
Explanation:
The MERGE statement performs insert, update, or delete operations on a target table based on the results of a join with a source table, so a single MERGE statement inside the stored procedure can insert the new products and delete the discontinued ones.

QUESTION 257
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have the following line-of-business solutions:
ERP system
Online WebStore
Partner extranet
One or more Microsoft SQL Server instances support each solution. Each solution has its own product catalog. You have an additional server that hosts SQL Server Integration Services (SSIS) and a data warehouse. You populate the data warehouse with data from each of the line-of-business solutions. The data warehouse does not store primary key values from the individual source tables.
The database for each solution has a table named Products that stores product information. The Products table in each database uses a separate and unique key for product records. Each table shares a column named ReferenceNr between the databases. This column is used to create queries that involve more than one solution.
You need to load data from the individual solutions into the data warehouse nightly.
The following requirements must be met:
If a change is made to the ReferenceNr column in any of the sources, set the value of IsDisabled to True and create a new row in the Products table.
If a row is deleted in any of the sources, set the value of IsDisabled to True in the data warehouse.
Solution: Perform the following actions:
Enable the Change Tracking for the Product table in the source databases.
Query the CHANGETABLE function from the sources for the updated rows.
Set the IsDisabled column to True for the listed rows that have the old ReferenceNr value.
Create a new row in the data warehouse Products table with the new ReferenceNr value.
Does the solution meet the goal?

A. Yes
B. No

Answer: B
Explanation:
We must check for deleted rows, not just updated rows.
References: https://www.timmitchell.net/post/2016/01/18/getting-started-with-change-tracking-in-sql-server/

QUESTION 258
Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.
You are a database administrator for an e-commerce company that runs an online store. The company has the databases described in the following table.

Each day, you publish a Microsoft Excel workbook that contains a list of product names and current prices to an external website. Suppliers update pricing information in the workbook. Each supplier saves the workbook with a unique name.
Each night, the Products table is deleted and refreshed from MDS by using a Microsoft SQL Server Integration Services (SSIS) package. All files must be loaded in sequence.
You need to add a data flow in an SSIS package to perform the Excel files import in the data warehouse.
What should you use?

A. Lookup transformation
B. Merge transformation
C. Merge Join transformation
D. MERGE statement
E. Union All transformation
F. Balanced Data Distributor transformation
G. Sequential container
H. Foreach Loop container

Answer: A
Explanation:
If you’re familiar with SSIS and don’t want to run the SQL Server Import and Export Wizard, create an SSIS package that uses the Excel Source and the SQL Server Destination in the data flow.

References: https://docs.microsoft.com/en-us/sql/integration-services/import-export-data/import-data-from-excel-to-sql



[2018-March-New]100% Success-Braindump2go 70-764 PDF and 70-764 VCE Dumps 332Q Instant Download[172-182]

2018 March New Microsoft 70-764 Exam Dumps with PDF and VCE Free Updated Today! Following are some new 70-764 Real Exam Questions:


[2018-March-New]Exam Pass 100%!Braindump2go 70-764 Exam VCE and PDF 332Q Instant Download[161-171]

2018 March New Microsoft 70-764 Exam Dumps with PDF and VCE Free Updated Today! Following are some new 70-764 Real Exam Questions:


[March-2018]100% Real Exam Questions-Braindump2go 210-250 Dumps VCE and PDF 111Q Download[73-83]

2018 March Latest Cisco 210-250 Exam Dumps with PDF and VCE Free Updated Today! Following are some new 210-250 Real Exam Questions:

