Lombok

@Data

@Data is a convenient shortcut annotation that bundles the features of @ToString, @EqualsAndHashCode, @Getter / @Setter and @RequiredArgsConstructor together. In other words, @Data generates all the boilerplate that is normally associated with simple POJOs (Plain Old Java Objects) and beans: getters for all fields, setters for all non-final fields, appropriate toString, equals and hashCode implementations that involve the fields of the class, and a constructor that initializes all final fields, as well as all non-final fields with no initializer that have been marked with @NonNull, in order to ensure the field is never null.
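
For instance, a minimal sketch (the User class and its fields here are made up for illustration):

import lombok.Data;
import lombok.NonNull;

@Data
public class User {
    private final String username; // final: set via the generated constructor, getter only
    @NonNull private String email; // non-final + @NonNull: constructor parameter, null-checked setter
    private int loginCount;        // plain non-final field: getter and setter generated
}

Lombok then generates the equivalent of a User(String username, String email) constructor, getters for all three fields, setters for email and loginCount, and equals, hashCode and toString over all fields.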

@Value

@Value is the immutable variant of @Data; all fields are made private and final by default, and setters are not generated. The class itself is also made final by default, because immutability is not something that can be forced onto a subclass. Like @Data, useful toString(), equals() and hashCode() methods are also generated, each field gets a getter method, and a constructor that covers every argument (except final fields that are initialized in the field declaration) is also generated.
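
A minimal sketch, again with a made-up class:

import lombok.Value;

@Value
public class Point {
    int x; // no modifiers needed: @Value makes fields private and final
    int y;
}

Here new Point(1, 2), getX() and getY() are generated, no setters exist, and Point itself is final.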

 

Resolving Git Issue: RPC failed: curl 56 Recv failed: Connection was aborted

Came across an interesting Git issue today. I tried to clone a relatively large repo and got the following error:
RPC failed: curl 56 Recv failed: Connection was aborted
The remote end hung up unexpectedly while git cloning
Turns out, I needed to increase the HTTP PostBuffer size to pull a repo this large:
git config --global http.postBuffer 157286400
The above command will increase the postBuffer size to 150MB, i.e. 150 × 1024 × 1024 = 157286400 bytes (the size of the repo I was trying to download)
You can check your local http.postBuffer size by running the following command:
git config --get http.postBuffer
If this command returns nothing, you’re using the default post buffer size (1MB)

How to Generate Scripts from Data in SQL Server DB

SQL Server has a neat feature that allows you to output the contents of a Table as INSERT statements so you can insert it into another DB. This is a very useful feature if you want to test against production data on your own DB without pulling a hefty, lengthy DB Backup.

The process is as follows.

  • Right click the DB you want to generate a script of INSERT statements from
  • Click Tasks -> Generate Scripts…
  • In the window that opens, click “Choose Objects” and tick the DB table whose data you want to generate INSERT statements from
  • Then, in “Set Scripting Options” click the Advanced button and scroll down to the last item under the “General” heading. This item is called “Types of Data to Script”. Here, change “Schema only” to “Data only”
  • Finally, in the “Set Scripting Options” step, choose “Save to a new query window” and click Finish!

Voilà!! A new query window opens with a list of INSERT statements to generate the data exactly as it was in the original DB.
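
For illustration, the generated script looks roughly like this (the Person table and its rows here are hypothetical, not real output):

SET IDENTITY_INSERT [dbo].[Person] ON
INSERT [dbo].[Person] ([ID], [Name]) VALUES (1, N'Alice')
INSERT [dbo].[Person] ([ID], [Name]) VALUES (2, N'Bob')
SET IDENTITY_INSERT [dbo].[Person] OFF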

Reference: https://dzone.com/articles/generate-database-scripts-with-data-in-sql-server

Index Physical Stats Analysis (sys.dm_db_index_physical_stats)

The following DMV (Dynamic Management View) can be extremely useful for both analyzing existing indexes and identifying new ones.

Overview and Usage

sys.dm_db_index_physical_stats is a Dynamic Management Function that returns fragmentation details of the indexes on tables in a particular database. This function takes the following parameters:

sys.dm_db_index_physical_stats ( 
    { database_id | NULL | 0 | DEFAULT }, 
    { object_id | NULL | 0 | DEFAULT }, 
    { index_id | NULL | 0 | -1 | DEFAULT },  
    { partition_number | NULL | 0 | DEFAULT }, 
    { mode | NULL | DEFAULT } )
  • Database_ID: The database we want to return physical stats for. Leave this as NULL to return stats for all DBs on a DB Server. The built-in function DB_ID() can be passed instead of NULL to return info for just the current DB (If you specify NULL for database_id, you must also specify NULL for object_id, index_id, and partition_number)
  • Object_ID: Is the object ID of the Table or View an Index is related to (If you specify NULL for object_id, you must also specify NULL for index_id and partition_number.)
  • Index_ID: Is the Index ID (If you specify NULL for index_id, you must also specify NULL for partition_number)
  • Partition_Number: Is the partition number in the object
  • Mode: Mode specifies the scan level that is used to obtain statistics. Valid inputs are DEFAULT, NULL, LIMITED, SAMPLED, or DETAILED. The default (NULL) is LIMITED
SELECT object_name(IPS.object_id) AS [TableName], 
SI.name AS [IndexName], 
IPS.Index_type_desc, 
IPS.avg_fragmentation_in_percent, 
IPS.avg_fragment_size_in_pages, 
IPS.avg_page_space_used_in_percent, 
IPS.record_count, 
IPS.ghost_record_count,
IPS.fragment_count
FROM sys.dm_db_index_physical_stats(db_id(N'RCOMDev'), NULL, NULL, NULL , 'DETAILED') IPS
JOIN sys.tables ST WITH (nolock) ON IPS.object_id = ST.object_id
JOIN sys.indexes SI WITH (nolock) ON IPS.object_id = SI.object_id AND IPS.index_id = SI.index_id
WHERE ST.is_ms_shipped = 0 -- exclude system tables shipped by Microsoft

The above query returns a Fragmentation percentage as well as a Fragmentation size in pages. The function call can be modified to analyze the indexes on a particular object, as follows:

FROM sys.dm_db_index_physical_stats(db_id(N'RCOMDev'), OBJECT_ID(N'dbo.Drivers'), NULL, NULL , 'DETAILED') IPS

Where the Database is RCOMDev and the Object is the Drivers table in the dbo schema.

Explanation of Columns Returned

  • avg_fragmentation_in_percent – Indicates the average fragmentation of data across all pages in the Index. Ideally, this figure should be as low as possible (see the maintenance sketch after this list)
  • avg_page_space_used_in_percent – Returns the average density of the pages in an index. Ideally, this figure should be as close to 100% as possible.
  • fragment_count – The lower the number of fragments, the greater the indication that the data is stored contiguously
  • avg_fragment_size_in_pages – Larger fragment size is better as it means less Disk I/O is involved to read the same number of pages
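
As a rough rule of thumb (Microsoft's commonly cited guideline, not something derived from the query above), an index between roughly 5% and 30% fragmentation is a candidate for a reorganize, and above 30% for a rebuild. A sketch, using a hypothetical index on the Drivers table:

-- Moderate fragmentation (roughly 5-30%): reorganize in place
ALTER INDEX IX_Drivers_Name ON dbo.Drivers REORGANIZE;

-- Heavy fragmentation (above 30%): rebuild the index from scratch
ALTER INDEX IX_Drivers_Name ON dbo.Drivers REBUILD;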

 

Scanning Modes Explained

  • LIMITED – Fastest mode that scans the lowest number of pages. Only scans parent-level pages
  • SAMPLED – Slower than LIMITED as it also scans Leaf pages, but only scans 1% of all pages, so values are merely approximate. Great for giving a ballpark of Parent and Leaf level page statistics without running DETAILED
  • DETAILED – Scans all Parent and Leaf pages. Essentially does a LIMITED mode scan for Parent pages and then scans the Leaf pages of all parents.

Identity Columns and Identity Insert

An Identity column is an auto incrementing integer column that can be used to both uniquely identify rows in a table as well as order rows by insertion order.

Values cannot be inserted into an Identity column explicitly, but by turning IDENTITY_INSERT ON for a table, they can! The column also cannot be updated, even with IDENTITY_INSERT ON. A good workaround here is to turn IDENTITY_INSERT ON, delete the record whose Identity column value you want to change, and insert it back into the table with the new Identity column value (see the sketch after the table definition below).

To declare a column as an Identity column:

CREATE TABLE Person
(
   [ID] INT IDENTITY (1,2),
   [Name] NVARCHAR(MAX)
)

The first argument of the Identity column is the seed, or starting number, and the second is the increment. Above, the Identity column will start at 1 and increment by 2 every time a record is inserted.
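
A sketch of the delete-and-re-insert workaround described earlier, using this Person table (the ID values are made up):

SET IDENTITY_INSERT Person ON;

-- Remove the row whose Identity value needs to change
DELETE FROM Person WHERE ID = 3;

-- Re-insert it with the new Identity value; an explicit column list is required
INSERT INTO Person ([ID], [Name]) VALUES (99, N'Jane Doe');

SET IDENTITY_INSERT Person OFF;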

Also, deleting records from the table will not change the Identity column's next value. To correct this value, if it's got out of whack, you can run the following:

DBCC CHECKIDENT ('<TABLE_NAME>');

To just check the current Identity value, without updating it even if it's incorrect, you can use the following:

DBCC CHECKIDENT ('<TABLE_NAME>', NORESEED);

The Identity column's next value can be forcibly reseeded to a specific value by running the following SQL command (where 10 is the value we want to reseed to):

DBCC CHECKIDENT ('<TABLE_NAME>', RESEED, 10);

So, consider the scenario where you delete all records from a table – the Identity column will continue from where it last left off. If you would like to delete all rows and reset the Identity column back to the start (note that TRUNCATE TABLE itself resets the Identity column to its original seed; the DBCC call just makes the reset explicit):

TRUNCATE TABLE <TABLE_NAME>
GO 
DBCC CHECKIDENT ('<TABLE_NAME>', RESEED, 1); 
GO

Linked SQL Servers and Querying across DBs

To add a Linked SQL Server to your current SQL Server, run the following command:

exec sp_addlinkedserver @server='<SERVER_ADDRESS>'

You can then confirm that the Server has been linked by running the following Proc:

exec sp_linkedservers

The above proc simply lists the SQL Servers linked to your current instance. Note that linking a SQL Server is not bi-directional – linking Server B to Server A does not mean you can query DBs on Server A from Server B.

Once the linking is completed, you can run queries on DBs on the linked server using the following format:

[<SERVER_ADDRESS>].DatabaseName.dbo.TableName
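
For example (a sketch; the server, database and table names are placeholders):

SELECT TOP 10 * FROM [<SERVER_ADDRESS>].DatabaseName.dbo.TableName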

Linked Servers can be very useful, but there are a few limitations to be aware of before implementing them. Linked Servers cannot be used to call table-valued or scalar functions (though see the OPENQUERY sketch below), and the TRUNCATE command cannot be called through a Linked Server.
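
A common workaround for the function limitation is OPENQUERY, which executes the query on the linked server itself. A sketch, where dbo.fn_GetDriverName is a hypothetical scalar function on the remote DB:

SELECT * FROM OPENQUERY([<SERVER_ADDRESS>], 'SELECT DatabaseName.dbo.fn_GetDriverName(1) AS DriverName');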

To remove the Linked Server when you’re finished, run the following command:

exec sp_dropserver '<SERVER_ADDRESS>'

Querying across DBs on the same SQL Server instance is easy: you can just reference them by DB name

SELECT * FROM DatabaseName.dbo.TableName

Intersect and Except in SQL Server

The following examples show using the INTERSECT and EXCEPT operators.

The following query returns any distinct values that are returned by both the query on the left and right sides of the INTERSECT operator.

SELECT ProductID FROM Production.Product 
INTERSECT SELECT ProductID FROM Production.WorkOrder;

The following query returns any distinct values from the query left of the EXCEPT operator that aren’t also found on the right query.

SELECT ProductID FROM Production.Product 
EXCEPT SELECT ProductID FROM Production.WorkOrder;
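
As a self-contained illustration of both operators (the literal values here are made up, not taken from the AdventureWorks tables above):

SELECT x FROM (VALUES (1), (2), (3)) AS A(x)
INTERSECT
SELECT x FROM (VALUES (2), (3), (4)) AS B(x); -- returns 2 and 3

SELECT x FROM (VALUES (1), (2), (3)) AS A(x)
EXCEPT
SELECT x FROM (VALUES (2), (3), (4)) AS B(x); -- returns 1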

Different ways of clearing SQL Server Query Cache

Remove all elements from the plan cache for the entire instance

DBCC FREEPROCCACHE;

Use this command to clear the plan cache with care. Freeing the plan cache causes, for example, a stored procedure to be recompiled instead of reused from the cache, which can cause a sudden, temporary decrease in query performance.

Flush the plan cache for the entire instance and suppress the regular completion message

DBCC FREEPROCCACHE WITH NO_INFOMSGS;

Suppresses the output “DBCC execution completed. If DBCC printed error messages, contact your system administrator.”

Flush the ad hoc and prepared plan cache for the entire instance

DBCC FREESYSTEMCACHE ('SQL Plans');

Flush the ad hoc and prepared plan cache for one resource pool

DBCC FREESYSTEMCACHE ('SQL Plans', 'LimitedIOPool');

Flush the entire plan cache for one resource pool

DBCC FREEPROCCACHE ('LimitedIOPool');

Remove all elements from the plan cache for one database (does not work in SQL Azure)

-- Get DBID from one database name first
DECLARE @intDBID INT;
SET @intDBID = (SELECT [dbid] 
                FROM master.dbo.sysdatabases 
                WHERE name = N'AdventureWorks2014');

DBCC FLUSHPROCINDB (@intDBID);

Clear plan cache for the current database

USE AdventureWorks2014;
GO
-- New in SQL Server 2016 and SQL Azure
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;

Remove one query plan from the cache

-- Find the plan handle for the cached plan of interest (filtering by object name)
SELECT cp.plan_handle, cp.objtype, cp.usecounts, OBJECT_NAME(st.objectid) AS [ObjectName],
DB_NAME(st.dbid) AS [DatabaseName]
FROM sys.dm_exec_cached_plans AS cp CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS st 
WHERE OBJECT_NAME(st.objectid)
LIKE N'%Core__DropIndex%' OPTION (RECOMPILE);

-- Pass the plan_handle returned above to remove just that plan (example handle shown)
DBCC FREEPROCCACHE (0x0500080021F7AA2C801936C20000000001000000000000000000000000000000000000000000000000000000);

 

Reference: https://stackoverflow.com/questions/1873025/how-can-i-clear-the-sql-server-query-cache