I read 《时间贫困》 (Time Poverty) and found it genuinely useful for me. I do have many goals and many ideas, but I have not been able to connect my annual goals, my milestones, and my daily schedule, and I don't know whether my weekly time allocation will actually let me finish my tasks on time. Put another way, I'm not clear where my time goes each day, which leaves me with a very weak sense of control over myself and over my time.
Actions to address this:
Time really is a sponge; there is always room to squeeze out more. Once you know where your time is going, there is always something that can be adjusted. It comes down to shifting priorities: in your life right now, what matters most?
Keep at it!
Micrometer added @MeterTag support in 2023. We are integrating Micrometer with Datadog in a Spring Boot project, and this post documents how we combine @MeterTag with @Timed to tag metrics with dynamic values, e.g. parameters from the method input. We can do this with a ValueResolver plus an expression language; for our case, SpEL works fine.

Import the related dependencies. In the Gradle Kotlin DSL, that looks like:
implementation("io.micrometer:micrometer-registry-statsd:1.11.4")implementation("io.micrometer:micrometer-core:1.11.4")implementation("io.micrometer:micrometer-commons:1.11.4")implementation("org.springframework:spring-aspects")
In the application class, enable AspectJ auto-proxying:
```
@SpringBootApplication
@Import(HttpSecurityConfig::class)
@EnableScheduling
@EnableIntegrationManagement
@EnableAspectJAutoProxy(proxyTargetClass = true)
@Modulithic
class Application

fun main(args: Array<String>) {
    runApplication<Application>(args = args)
}
```
For the annotation you want to use, define the corresponding aspect bean. Since we want to enhance @Timed with @MeterTag, we need to set the meterTagAnnotationHandler:
```
@Configuration
class MetricsConfig {
    @Bean
    fun timedAspect(meterRegistry: MeterRegistry): TimedAspect {
        val timedAspect = TimedAspect(meterRegistry)
        timedAspect.setMeterTagAnnotationHandler(
            MeterTagAnnotationHandler(
                { ValueResolver { p -> p.toString() } },
                { CachedSpelValueExpressionResolver() },
            ),
        )
        return timedAspect
    }
}
```
We mainly rely on CachedSpelValueExpressionResolver to evaluate expressions written in the Spring Expression Language. The resolver is defined as follows:
```
open class SpelValueExpressionResolver : ValueExpressionResolver {
    private val log = KotlinLogging.logger {}

    override fun resolve(expression: String, parameter: Any): String {
        try {
            val context = SimpleEvaluationContext.forReadOnlyDataBinding().withInstanceMethods().build()
            return parseExpression(expression).getValue(context, parameter, String::class.java) ?: ""
        } catch (ex: Exception) {
            log.error("Exception occurred while trying to evaluate the SpEL expression [$expression]", ex)
        }
        return parameter.toString()
    }

    open fun parseExpression(expression: String): Expression = SpelExpressionParser().parseExpression(expression)
}

// Caches parsed expressions so each SpEL string is only parsed once
class CachedSpelValueExpressionResolver : SpelValueExpressionResolver() {
    private val expressionsCache: MutableMap<String, Expression> = ConcurrentHashMap()

    override fun parseExpression(expression: String): Expression =
        expressionsCache.computeIfAbsent(expression) { super.parseExpression(expression) }
}
```
After that, we can add metrics as needed:
```
@DgsData(parentType = DgsConstants.QUERY.TYPE_NAME, field = DgsConstants.QUERY.HelloWorldPing)
@Timed
fun helloWorld(
    // the SpEL expression is evaluated against the annotated parameter,
    // so "length()" invokes String.length() on the message
    @MeterTag(key = "message.size", expression = "length()") message: String,
): String = message
```
For local testing, since Datadog receives metrics over UDP (statsd), we can write a small script with a UDP socket server to print out the content:
```
const dgram = require("dgram");
const port = process.argv[2] || 8125;
const socket = dgram.createSocket({ type: "udp4", reuseAddr: true });

socket.on("message", msg => console.log(msg.toString()));
socket.on("error", error => {
  console.log("error", error);
  socket.close();
});
socket.bind(port);
```
You should then see the metrics printed out with the tags you want!
We faced an interesting issue when using Bazel. The background: we host code in multiple repositories (SOA style), so to share code we need to export a library and publish it to the company-wide Artifactory.
We use Bazel mainly because we have code in different languages (RoR, Java, Kotlin, Scala, Clojure, etc.), and Bazel makes it easier to manage them across languages in a monorepo.
For exporting libraries we rely heavily on rules_jvm_external, which has a built-in java_export rule we can use directly:
```
def java_export(
        name,
        maven_coordinates,
        deploy_env = [],
        excluded_workspaces = {name: None for name in DEFAULT_EXCLUDED_WORKSPACES},
        pom_template = None,
        visibility = None,
        tags = [],
        testonly = None,
        **kwargs)
```
Basically we follow this pattern to define our srcs, deps, and maven_coordinates, and Bazel handles exporting the artifact and publishing it to the selected Artifactory.
On the other end, we have a Kotlin service built with Gradle. When we pulled in the library, we found that besides the code we exported, the jar also contained directories from com.google.protobuf.*. Unfortunately, the protobuf version bundled in the lib was lower than the one our service uses, which broke compilation of our code base, especially since we have a couple of protobuf extension functions to enrich conversions that the old protobuf version does not support.

So we needed a way to keep undesired directories out of the jar; in our case, to remove everything under com.google.protobuf.*.
https://bazel.build/reference/be/java#java_library
Moving those dependencies into runtime_deps (instead of deps) keeps their packages out of the exported jar. It solved our problem perfectly.
In our business use case, we need to create a proxy server to redirect traffic from a specific region. To achieve this, we use AWS Lambda + API Gateway with proper CORS settings and allowed methods.
At first we did this directly in the AWS console. That works fine; however, it is more maintainable as infrastructure-as-code, which gives us version-controlled, reviewable, and repeatable deployments.
Per the guide, inside Amazon new CDK features are developed first in TypeScript and then translated to the other supported languages with a transpilation tool. So in this MVP I'll also use TypeScript.
Prerequisite: you need at least enough permissions on the corresponding AWS account to move forward.
The day-to-day commands:

- `cdk diff` to check local changes
- `cdk synth` to generate the CloudFormation template
- `cdk deploy` to deploy the AWS resources
- `cdk destroy` to tear everything down, or just manually delete the corresponding stack from the CloudFormation console page

The stack itself:

```
export class DemoStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const handler = new lambda.Function(this, "handler", {
      runtime: lambda.Runtime.NODEJS_16_X,
      code: lambda.Code.fromAsset("resource"),
      handler: "index.handler",
      timeout: cdk.Duration.seconds(30),
    });

    const lambdaIntegration = new HttpLambdaIntegration(
      "LambdaIntegration",
      handler
    );

    const httpApi = new apigatewayv2.HttpApi(this, "test", {
      apiName: "api-gateway",
      corsPreflight: {
        allowOrigins: ["https://llchen60.com"],
        allowHeaders: ["paul"],
        allowMethods: [apigatewayv2.CorsHttpMethod.GET],
      },
      createDefaultStage: false,
    });

    httpApi.addStage("prod", {
      stageName: "prod",
      autoDeploy: true,
    });

    httpApi.addRoutes({
      path: "/llchen/{path+}",
      methods: [apigatewayv2.HttpMethod.GET],
      integration: lambdaIntegration,
    });
  }
}

const app = new cdk.App();
new DemoStack(app, "DemoStack");
```

Getting set up from scratch:

```
// install cdk globally
npm install -g aws-cdk
// configure access token, secret, account id, preferred region here
aws configure
mkdir cdkAppDictName
cdk init app --language typescript
// check command guide for reference https://docs.aws.amazon.com/cdk/v2/guide/cli.html
// list stacks in the app
cdk list
// synthesize and print the cloudformation template
cdk synth
// bootstrap CDK toolkit stack
cdk bootstrap
cdk destroy
// compares the specified stack and its dependencies with the deployed stacks or a local CloudFormation template
cdk diff
// deploys one or more specified stacks
cdk deploy
```
Create a workflow YAML file under `.github/workflows` and define what events should trigger the workflow; see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#run-name
name: "cdk deploy"on: push: branches: - masterjobs: aws_cdk: runs-on: "your runner" steps: - name: Post to a Slack Channel Before Deployment id: slack-before-deploy uses: slackapi/slack-github-action@v1.23.0 with: # Slack channel id, channel name, or user id to post message. # See also: https://api.slack.com/methods/chat.postMessage#channels channel-id: "channel-id" # For posting a rich message using Block Kit payload: | { "blocks": [ { "type": "section", "text": { "type": "mrkdwn", "text": "*Hello World*" } } ] } env: SLACK_BOT_TOKEN: ${{ secrets.SLACK_BOT_TOKEN }} - name: Checkout repo uses: actions/checkout@v3 - uses: actions/setup-node@v2 with: node-version: "14" - name: Configure aws credentials uses: aws-actions/configure-aws-credentials@master with: aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY }} aws-secret-access-key: ${{ secrets.AWS_SECRET_KEY }} aws-region: us-west-1 - name: Install dependencies run: npm install -g yarn && yarn - name: Synth stack run: yarn cdk synth - name: Deploy stack run: yarn cdk deploy --all - name: Post to a Slack Channel Post Deployment id: slack-after-deploy-success if: ${{ success() }} uses: slackapi/slack-github-action@v1.23.0 with: # Slack channel id, channel name, or user id to post message. # See also: https://api.slack.com/methods/chat.postMessage#channels channel-id: "channel-id" # For posting a rich message using Block Kit payload: | { "blocks": [ { "type": "section", "text": { "type": "mrkdwn", "text": "text msg" } } ] } env: SLACK_BOT_TOKEN: ${{ secrets.SLACK_BOT_TOKEN }} - name: Post to a Slack Channel Post Deployment id: slack-after-deploy-failure if: ${{ failure() }} uses: slackapi/slack-github-action@v1.23.0 with: # Slack channel id, channel name, or user id to post message. # See also: https://api.slack.com/methods/chat.postMessage#channels channel-id: "channel-id" # For posting a rich message using Block Kit payload: | { "blocks": [ { "type": "section", "text": { "type": "mrkdwn", "text": "test msg" } } ] } env: SLACK_BOT_TOKEN: ${{ secrets.SLACK_BOT_TOKEN }}
That's the whole process of integrating CDK with GitHub Actions; let me know if you have any questions! :)
The Singleton pattern ensures a class has only one instance and provides a global point of access to it.
```
public class Singleton {
    private static Singleton uniqueInstance;

    private Singleton() {}

    public static Singleton getInstance() {
        if (uniqueInstance == null) {
            uniqueInstance = new Singleton();
        }
        return uniqueInstance;
    }
}
```
This lazy version is not thread safe. Consider two threads interleaving:

1. Thread A evaluates `if (uniqueInstance == null)` and sees true
2. Thread B evaluates `if (uniqueInstance == null)` and also sees true
3. Thread A runs `new Singleton()`
4. Thread B runs `new Singleton()` as well, so two instances exist
```
public class Singleton {
    private static Singleton uniqueInstance;

    private Singleton() {}

    // Synchronizing getInstance fixes the race, at the cost of locking on every call
    public static synchronized Singleton getInstance() {
        if (uniqueInstance == null) {
            uniqueInstance = new Singleton();
        }
        return uniqueInstance;
    }
}
```
```
public class Singleton {
    // Eager initialization: the instance is created when the class is loaded,
    // so getInstance needs no synchronization
    private static Singleton uniqueInstance = new Singleton();

    private Singleton() {}

    public static Singleton getInstance() {
        return uniqueInstance;
    }
}
```
```
public class Singleton {
    private volatile static Singleton uniqueInstance;

    private Singleton() {}

    // Double-checked locking: only enter the synchronized block on first use,
    // so the method itself must not be synchronized
    public static Singleton getInstance() {
        if (uniqueInstance == null) {
            synchronized (Singleton.class) {
                if (uniqueInstance == null) {
                    uniqueInstance = new Singleton();
                }
            }
        }
        return uniqueInstance;
    }
}
```
```
public enum Singleton {
    UNIQUE_INSTANCE;
}

public class SingletonClient {
    Singleton singleton = Singleton.UNIQUE_INSTANCE;
}
```
ClassLoader

- Types: bootstrap class loader, extension class loader, application class loader
- How does it work: the delegation model. A class loader first delegates loading to its parent; only when the parent chain cannot find the class does the loader try to load it itself.
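A small Java sketch (my addition for illustration) that prints the delegation chain; the exact loader names vary by JDK version:

```
public class ClassLoaderDemo {
    public static void main(String[] args) {
        // Walk up the parent chain starting from the application class loader
        ClassLoader cl = ClassLoaderDemo.class.getClassLoader();
        while (cl != null) {
            System.out.println(cl);
            cl = cl.getParent();
        }
        // The bootstrap class loader is represented as null in Java
        System.out.println("null (bootstrap class loader)");
    }
}
```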
Define an interface for creating an object, but let subclasses decide which class to instantiate. Factory Method lets a class defer instantiation to subclasses.
```
public abstract class Animal {
    String name;

    public abstract void bark();
}

public interface IAnimalFactory {
    Animal createAnimal(String type);
}
```
```
import java.util.Arrays;
import java.util.Random;

public class Cat extends Animal {
    public Cat(String name) {
        this.name = name;
    }

    @Override
    public void bark() {
        System.out.println("miao");
    }
}

public class Dog extends Animal {
    public Dog(String name) {
        this.name = name;
    }

    @Override
    public void bark() {
        System.out.println("wang");
    }
}

public class RandomFactory implements IAnimalFactory {
    @Override
    public Animal createAnimal(String name) {
        // Pick a concrete Animal subtype at random; callers only see the Animal interface
        Random random = new Random();
        int num = random.nextInt(2);
        var list = Arrays.asList("cat", "dog");
        switch (list.get(num)) {
            case "cat": return new Cat(name);
            case "dog": return new Dog(name);
            default: return null;
        }
    }
}
```
@GetMapping("/factory") public String factory() { RandomFactory randomFactory = new RandomFactory(); randomFactory.createAnimal("test").bark(); randomFactory.createAnimal("test").bark(); randomFactory.createAnimal("test").bark(); randomFactory.createAnimal("test").bark(); randomFactory.createAnimal("test").bark(); return "check the log"; }
Strategy uses composition rather than inheritance. The Strategy pattern defines a family of algorithms, encapsulates each one, and makes them interchangeable. We can switch them at run time, decoupling the algorithm from the place it is used.
```
public interface IFlyBehavior {
    void fly();
}

public interface IQuackBehavior {
    void quack();
}
```
```
public class Duck {
    IFlyBehavior flyBehavior;
    IQuackBehavior quackBehavior;

    public void fly() {
        flyBehavior.fly();
    }

    public void quack() {
        quackBehavior.quack();
    }
}

public class JetFly implements IFlyBehavior {
    @Override
    public void fly() {
        System.out.println("Jet Fly");
    }
}

public class SimplyFly implements IFlyBehavior {
    @Override
    public void fly() {
        System.out.println("Simple flying");
    }
}

public class LoudQuack implements IQuackBehavior {
    @Override
    public void quack() {
        System.out.println("Make noise!");
    }
}

public class NoQuack implements IQuackBehavior {
    @Override
    public void quack() {
        // do nothing, as no quack exists
    }
}

public class ToyDuck extends Duck {
    public ToyDuck(IFlyBehavior flyBehavior, IQuackBehavior quackBehavior) {
        this.flyBehavior = flyBehavior;
        this.quackBehavior = quackBehavior;
    }
}
```
@GetMapping("/strategy") public String strategy() { Duck testDuck = new ToyDuck(new JetFly(), new LoudQuack()); testDuck.fly(); testDuck.quack(); return "check the log"; }
One highlight: the CondimentDecorator is a Beverage, and also has a Beverage. We are using inheritance to achieve type matching, but we aren't using inheritance to get behavior.
```
public abstract class Beverage {
    String description = "Unknown Beverage";

    public abstract double cost();

    public String getDescription() {
        return description;
    }
}

public abstract class CondimentDecorator extends Beverage {
    Beverage beverage;

    public abstract String getDescription();
}
```
```
public class DecafCoffee extends Beverage {
    public DecafCoffee() {
        description = "Decaf Coffee";
    }

    @Override
    public double cost() {
        return 2.5;
    }
}

public class Espresso extends Beverage {
    public Espresso() {
        description = "Espresso";
    }

    @Override
    public double cost() {
        return 4;
    }
}

public class HouseBlend extends Beverage {
    public HouseBlend() {
        description = "House Blend Coffee";
    }

    @Override
    public double cost() {
        return 0.99;
    }
}

public class MilkDecorator extends CondimentDecorator {
    public MilkDecorator(Beverage beverage) {
        this.beverage = beverage;
    }

    @Override
    public double cost() {
        return beverage.cost() + 0.8;
    }

    @Override
    public String getDescription() {
        return beverage.getDescription() + " with milk";
    }
}

public class MochaDecorator extends CondimentDecorator {
    public MochaDecorator(Beverage beverage) {
        this.beverage = beverage;
    }

    @Override
    public double cost() {
        return beverage.cost() + 0.5;
    }

    @Override
    public String getDescription() {
        return beverage.getDescription() + " with Mocha";
    }
}
```
@GetMapping("/decorator") public String decorator() { Beverage decafCoffee = new DecafCoffee(); log.info("before decoration" + decafCoffee.cost()); Beverage decafWithMocha = new MochaDecorator(decafCoffee); log.info("after decoration" + decafWithMocha.cost()); log.info("after decoration 2" + new MochaDecorator(new MochaDecorator(decafCoffee)).cost()); return "check the log"; }
```
/**
 * Other abilities of the observer
 */
public interface IDisplay {
    void display();
}

/**
 * Observable interface, used to register/remove/notify observers
 */
public interface IObservable {
    ActionResult registerObserver(IObserver observer);
    ActionResult removeObserver(IObserver observer);
    void notifyObservers();
}

/**
 * The observable calls update on each registered observer so it can refresh its state
 */
public interface IObserver {
    void update(SharebleData data);
}

@Data
@RequiredArgsConstructor
@AllArgsConstructor
public class SharebleData {
    Double temperature;
    Double humidity;
    Double pressure;
}

public enum Status {
    SUCCESS,
    FAILURE
}

@Data
@AllArgsConstructor
public class ActionResult {
    @NonNull Status status;
    @Nullable List<String> errorReason;
}
```
```
public class ObservableImpl implements IObservable {
    // instance state, not static: each observable keeps its own observer list
    private final List<IObserver> observerList = new ArrayList<>();
    private final SharebleData data = new SharebleData();

    public void setSharebleData(double tem, double humidity, double pressure) {
        data.setHumidity(humidity);
        data.setTemperature(tem);
        data.setPressure(pressure);
        notifyObservers();
    }

    @Override
    public ActionResult registerObserver(IObserver observer) {
        observerList.add(observer);
        return new ActionResult(Status.SUCCESS, null);
    }

    @Override
    public ActionResult removeObserver(IObserver observer) {
        observerList.remove(observer);
        return new ActionResult(Status.SUCCESS, null);
    }

    @Override
    public void notifyObservers() {
        observerList.forEach(observer -> observer.update(data));
    }
}

public class CurrentConditionDisplay implements IObserver, IDisplay {
    private double temp;
    private double pressure;
    private final IObservable observable;

    public CurrentConditionDisplay(IObservable subject) {
        this.observable = subject;
        observable.registerObserver(this);
    }

    @Override
    public void display() {
        System.out.println(String.format(
            "======= print out current tem and pressure! temp: %f, pressure: %f ", temp, pressure));
    }

    @Override
    public void update(SharebleData data) {
        temp = data.getTemperature();
        pressure = data.getPressure();
        display();
    }
}
```
```
@RestController
public class TestController {
    @GetMapping("/observer")
    public String observer() {
        ObservableImpl observable = new ObservableImpl();
        observable.setSharebleData(30, 0.7, 80); // no observer registered yet, so nothing is printed
        CurrentConditionDisplay currentConditionDisplay = new CurrentConditionDisplay(observable);
        observable.setSharebleData(31, 0.7, 80);
        observable.setSharebleData(32, 0.7, 80);
        observable.setSharebleData(33, 0.7, 80);
        return "Please check log";
    }
}

// Output from console
// ======= print out current tem and pressure! temp: 31.000000, pressure: 80.000000
// ======= print out current tem and pressure! temp: 32.000000, pressure: 80.000000
// ======= print out current tem and pressure! temp: 33.000000, pressure: 80.000000
```
The `@Transactional` annotation exposes these attributes:

- `value` and `transactionManager`: choose which transaction manager to use
- `propagation`: the propagation behavior, defaulting to `REQUIRED`
- `timeout` and `timeoutString`: how long the transaction may run before timing out
- `readOnly`: a hint that the transaction does not write
- `rollbackFor` and `rollbackForClassName`: exception types that must trigger a rollback
- `noRollbackFor` and `noRollbackForClassName`: exception types that must not trigger a rollback
Here `addStatementReportOperation` uses the SERIALIZABLE isolation level, which overrides the class-level read-only transaction:

```
@Service
@Transactional(readOnly = true)
public class OperationService {

    @Transactional(isolation = Isolation.SERIALIZABLE)
    public boolean addStatementReportOperation(
            String statementFileName,
            long statementFileSize,
            int statementChecksum,
            OperationType reportType) {
        ...
    }
}
```
Upon investigation, this happens in the following scenario:
A prepared statement is generated in postgresql, but never stored in rails. Since the code was interrupted before storing the statement, the @counter variable was never incremented even though it was used to generate a prepared statement.
That pretty much describes the issue. A prepared statement on the Postgres side is a server-side object that can be used to optimize performance. When the PREPARE statement is executed, the specified statement is parsed, analyzed, and rewritten. When an EXECUTE command is subsequently issued, the prepared statement is planned and executed.

When an identifier is already bound to an existing prepared statement but Rails does not realize it, this issue occurs.
Here is the fix: https://github.com/rails/rails/pull/41356/files

```
# before
def next_key
  "a#{@counter + 1}"
end

# after
def next_key
  "a#{@counter += 1}"
end
```

This change increments the prepared-statement counter before making a prepared statement. Thus if the statement is aborted on the Rails side, the app won't end up in a perpetual crash state.
- equity grant agreements
- types of equity
Clock APIs map across platforms as follows:

- `clock_gettime(CLOCK_MONOTONIC)` corresponds to `System.nanoTime()`: a monotonic clock, suitable for measuring elapsed time
- `clock_gettime(CLOCK_REALTIME)` corresponds to `System.currentTimeMillis()`: a wall clock, which can jump when the system time is adjusted
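A small Java sketch of the difference (my addition for illustration): the monotonic reading is only meaningful as a difference between two calls, while the wall-clock reading is an absolute timestamp that NTP adjustments can shift.

```
public class ClockDemo {
    public static void main(String[] args) throws InterruptedException {
        long wallStart = System.currentTimeMillis(); // wall clock, like CLOCK_REALTIME
        long monoStart = System.nanoTime();          // monotonic clock, like CLOCK_MONOTONIC

        Thread.sleep(100);

        // Both report roughly 100ms here, but only the monotonic difference
        // is guaranteed not to be distorted by system clock adjustments
        System.out.println("wall elapsed ms: " + (System.currentTimeMillis() - wallStart));
        System.out.println("mono elapsed ms: " + (System.nanoTime() - monoStart) / 1_000_000);
    }
}
```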
Concurrency bugs are hard to find by testing because they are rare and difficult to reproduce. For this reason, databases have long tried to hide concurrency issues from application developers by providing transaction isolation.

In theory, isolation should make your life easier by letting you pretend that no concurrency is happening: serializable isolation means that the database guarantees that transactions have the same effect as if they ran serially.

In practice, isolation has a performance cost, and many databases don't want to pay that price. Thus it's common for systems to use weaker levels of isolation, which protect against some concurrency issues, but not all.
Read committed prevents dirty reads: only committed records can be seen.

How it's implemented:

- Dirty writes: row-level locks
- Dirty reads: one option is to also use row-level locks for reads
This harms the response time of read-only transactions and is bad for operability: a slowdown in one part of an application can have a knock-on effect in a completely different part of the application, due to waiting for locks.
Instead, most databases prevent dirty reads using the following approach:
for every object that is written, the database remembers both the old committed value and the new value set by the transaction that currently holds the write lock.
While the transaction is ongoing, any other transactions that read the object are simply given the old value. Only when the new value is committed do transactions switch over to reading the new value.
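A toy sketch of that two-version idea (my illustration, not a real storage engine): readers always get the last committed value, and only a commit switches them over to the new one.

```
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class ReadCommittedStore {
    private static final class Versioned {
        String committed;       // value visible to other transactions
        String uncommitted;     // value written by the transaction holding the write lock
        long writerTxn = -1;    // id of the transaction holding the write lock, -1 if none
    }

    private final Map<String, Versioned> objects = new ConcurrentHashMap<>();

    // Readers never block: they simply see the old committed value
    public synchronized String read(String key) {
        Versioned v = objects.get(key);
        return v == null ? null : v.committed;
    }

    public synchronized void write(long txn, String key, String value) {
        Versioned v = objects.computeIfAbsent(key, k -> new Versioned());
        if (v.writerTxn != -1 && v.writerTxn != txn) {
            throw new IllegalStateException("write lock held by txn " + v.writerTxn);
        }
        v.writerTxn = txn;
        v.uncommitted = value;
    }

    public synchronized void commit(long txn, String key) {
        Versioned v = objects.get(key);
        if (v != null && v.writerTxn == txn) {
            v.committed = v.uncommitted; // switch readers over to the new value
            v.writerTxn = -1;
        }
    }
}
```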
Snapshot isolation implementations likewise use write locks to prevent dirty writes, but reads do not require any locks.
To implement snapshot isolation, the database uses a generalization of the mechanism above, known as multi-version concurrency control (MVCC).

If a database only needed to provide read committed isolation, but not snapshot isolation, it would be sufficient to keep two versions of an object: the committed version and the overwritten-but-not-yet-committed version.

However, storage engines that support snapshot isolation typically use MVCC for their read committed isolation level as well.
The lost update problem can occur if an application reads some value from the database, modifies it, and writes back the modified value (a read-modify-write cycle). If two transactions do this concurrently, one of the modifications can be lost, because the second write does not include the first modification. (We sometimes say that the later write clobbers the earlier write.) This pattern occurs in various different scenarios:
UPDATE counters SET value = value + 1 WHERE key = 'foo';
```
BEGIN TRANSACTION;

SELECT * FROM figures
  WHERE name = 'robot' AND game_id = 222
  FOR UPDATE;  -- FOR UPDATE tells the database to take a lock on the returned rows

-- Check whether move is valid, then update the position
-- of the piece that was returned by the previous SELECT.
UPDATE figures SET position = 'c4' WHERE id = 1234;

COMMIT;
```
UPDATE wiki_pages SET content = 'new content' WHERE id = 1234 AND content = 'old content';
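A minimal JDBC sketch of driving this compare-and-set from application code (my illustration; the table and values match the example above, and the surrounding retry policy is left to the application):

```
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class WikiPageDao {
    // Returns true if our update won; false means another transaction
    // changed the content first and we must re-read and retry
    public boolean compareAndSetContent(Connection conn, long id,
                                        String oldContent, String newContent) throws SQLException {
        String sql = "UPDATE wiki_pages SET content = ? WHERE id = ? AND content = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, newContent);
            ps.setLong(2, id);
            ps.setString(3, oldContent);
            return ps.executeUpdate() == 1;
        }
    }
}
```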
In multi leader or leaderless replication system, a common approach in such replicated databases is to allow concurrent writes to create several conflicting versions of a value (also known as siblings), and to use application code or special data structures to resolve and merge these versions after the fact.
```
BEGIN TRANSACTION;

SELECT * FROM doctors
  WHERE on_call = true AND shift_id = 1234
  FOR UPDATE;

UPDATE doctors SET on_call = false
  WHERE name = 'Alice' AND shift_id = 1234;

COMMIT;
```
All of these examples follow a similar pattern:
1. A `SELECT` query checks whether some requirement is satisfied by searching for rows that match some search condition (there are at least two doctors on call, there are no existing bookings for that room at that time, the position on the board doesn't already have another figure on it, the username isn't already taken, there is still money in the account).
2. Depending on the result of the first query, the application code decides how to continue (perhaps to go ahead with the operation, or perhaps to report an error to the user and abort).
3. If the application decides to go ahead, it makes a write (`INSERT`, `UPDATE`, or `DELETE`) to the database and commits the transaction.

The effect of this write changes the precondition of the decision of step 2. In other words, if you were to repeat the `SELECT` query from step 1 after committing the write, you would get a different result, because the write changed the set of rows matching the search condition (there is now one fewer doctor on call, the meeting room is now booked for that time, the position on the board is now taken by the figure that was moved, the username is now taken, there is now less money in the account).

The steps may occur in a different order. For example, you could first make the write, then the `SELECT` query, and finally decide whether to abort or commit based on the result of the query.
In the case of the doctor on call example, the row being modified in step 3 was one of the rows returned in step 1, so we could make the transaction safe and avoid write skew by locking the rows in step 1 (`SELECT FOR UPDATE`). However, the other four examples are different: they check for the absence of rows matching some search condition, and the write adds a row matching the same condition. If the query in step 1 doesn't return any rows, `SELECT FOR UPDATE` can't attach locks to anything.
This effect, where a write in one transaction changes the result of a search query in another transaction, is called a phantom [3]. Snapshot isolation avoids phantoms in read-only queries, but in read-write transactions like the examples we discussed, phantoms can lead to particularly tricky cases of write skew.
```
SELECT * FROM bookings
  WHERE room_id = 123
    AND end_time   > '2018-01-01 12:00'
    AND start_time < '2018-01-01 13:00';
```
Predicate locks: when transaction A wants to read objects matching some condition, as in that `SELECT` query, it must acquire a shared-mode predicate lock on the conditions of the query. If another transaction B currently has an exclusive lock on any object matching those conditions, A must wait until B releases its lock before it is allowed to make its query.

Serial execution: we could instead try to execute only one transaction at a time, in serial order, on a single thread.
This became realistic around 2007, because RAM got cheap enough to keep the entire active dataset in memory, and OLTP transactions are usually short, making only a small number of reads and writes. This approach is implemented in VoltDB/H-Store, Redis, and Datomic.
Notice
Philosophy
In the interactive style of transaction, the network and the database take a lot of the time; to handle enough throughput, we would need to process multiple transactions concurrently to get reasonable performance.

Systems with single-threaded serial transaction processing instead require the entire transaction code to be submitted to the database ahead of time, as a stored procedure.
In order to prevent this anomaly, the database needs to track when a transaction ignores another transaction’s writes due to MVCC visibility rules. When the transaction wants to commit, the database checks whether any of the ignored writes have now been committed. If so, the transaction must be aborted.
When a transaction writes to the database, it must look in the indexes for any other transactions that have recently read the affected data. This process is similar to acquiring a write lock on the affected key range, but rather than blocking until the readers have committed, the lock acts as a tripwire: it simply notifies the transactions that the data they read may no longer be up to date.
Compared to two-phase locking, the big advantage of serializable snapshot isolation is that one transaction doesn’t need to block waiting for locks held by another transaction. Like under snapshot isolation, writers don’t block readers, and vice versa. This design principle makes query latency much more predictable and less variable. In particular, read-only queries can run on a consistent snapshot without requiring any locks, which is very appealing for read-heavy workloads.
To grasp the government's real intentions, don't just read the policy documents; also look at where government money flows and how much of it.

Matching administrative duties with fiscal capacity, and duties with expenditure responsibilities

The reality

After the tax-sharing reform, the central government took the larger share of revenue, but the work still fell to local governments, so the gap between local revenue and spending has to be covered by central transfer payments. At the national level the totals can be made to balance, but a balanced total does not mean every level of government gets covered.

Problems

Solutions

The central government redistributes through transfer payments to support the central and western regions.

Local governments rely on revenue from selling land-use rights to sustain "land finance," and capitalize future land income to borrow enormous sums from banks and other channels, using this land-based financing to drive industrialization and urbanization at high speed, while also piling up massive debt.

The keys to this model

China Development Bank and urban investment (chengtou) bonds

Governance of local government debt

Government cooperation with specific industrial enterprises
Modern economies have very strong economies of scale, so the entry barrier for new firms is very high: not only is the required investment large, but newcomers also face the huge cost and technology advantages incumbents have already accumulated.

One important feature of the East Asian economic miracle

Emerging manufacturing clusters geographically to a striking degree, because producing side by side saves transport costs on raw materials and intermediate inputs.

The photovoltaic industry

Government industry guidance funds and private equity funds

Characteristics of industry guidance funds

Once an industry guidance fund is set up, a dedicated company is also needed to operate and manage it.

External conditions for the development of government guidance funds

The 1994 tax-sharing reform is the watershed behind many major economic phenomena.

The main cause of regional differences in housing prices is the imbalance of supply and demand.

China manages construction-land quotas strictly: each year new quotas are allocated from the central government to the provinces, and from the provinces down to localities.

Real estate

China's situation

Facing the problem of insufficient consumption

The current government-led, investment-driven model of growth has several problems

The 19th Party Congress proposed

An economic structure that leans heavily on exports is relatively fragile.

Anti-globalization, populism

The real contest between China and the US is the technology shock; technological competition is the real game.

For a country at the technological frontier, the invention and application of new technology generally starts with scientific research and the laboratory, moves to applied technology and patents, and then to large-scale industrial production.

But for a late-developing country, the order is often reversed.
An interlocking conceptual structure is an essential part of a software entity: the collections of data, the relationships among data items, the algorithms, and the invocations of functions. These elements are abstract in themselves; the conceptual construct is the same under many different representations.

A programming systems product costs about nine times the effort of a standalone program written for personal use. Turning a program into a product is estimated to triple the effort, and the design, integration, and testing needed to combine components into a complete system triples it again; these two cost factors are essentially independent.

The craft of programming satisfies our deep inner need to create and gratifies feelings we share with all people, providing five kinds of joy.

Likewise, the craft has some inherent woes.

Lack of a reasonable schedule is the biggest cause of late projects; it outweighs all other causes combined.

All programmers are optimists: everything will go well.

Because programmers build with pure thought-stuff, we expect few difficulties in implementation. But our ideas themselves are flawed, so there will always be bugs.

Estimating techniques built around cost accounting confuse effort with progress. The man-month is dangerous because it implies that people and months are interchangeable.

Partitioning a task among multiple people adds extra communication effort: training and intercommunication.

Brooks's Law: adding manpower to a late software project makes it later.

With the same two years of experience and the same training, the best professional programmers are ten times more productive than the poorest.

A chief-programmer, surgical-team organization offers a way to get the product integrity that comes from a few minds, together with the total productivity of many helpers, while radically cutting communication effort.

Conceptual integrity is the most important consideration in system design.

The ratio of function to conceptual complexity is the ultimate test of system design, not richness of function alone.

Early and continuous communication gives the architects good cost awareness, gives the builders confidence in the design, and keeps each side's responsibilities clear.
Communication

The project workbook

Organizational structure

In a large team, subgroups tend to keep optimizing locally to meet their own targets, with little thought for the overall effect on the user; this loss of direction is the major hazard of large projects.

A system-wide, user-oriented attitude is the software programming manager's most important function.

Documentation standards: objectives, user manuals, internal documentation, schedule, budget, organization chart, and space allocation.

The project manager's basic job is to keep everyone moving in the same direction.

The project manager's main daily work is communication, not decision-making; documents communicate the plans and decisions across the whole team.

Both the user's actual needs and the user's perception of them change as programs are built, tested, and used.

For documentation, changes should be quantized into well-defined, numbered versions.

Programmers' reluctance to document designs comes not only from laziness but, more, from the designer's hesitation to defend design decisions that are still tentative.

As far as the talents of managers and technical people allow, the boss must pay great attention to developing both so that they are interchangeable, especially when you want to move people freely between technical and managerial roles.

An effective organization with two promotion ladders faces social obstacles; people must stay alert and fight them persistently.
Program maintenance is fundamentally different from hardware maintenance: it consists mostly of changes, such as repairing design defects, adding functions, or adapting to changes in the environment or configuration.

For a widely used program, the total cost of maintenance is typically 40 percent or more of the cost of developing it.

Campbell points out an interesting curve of bugs per month over a product's life: it first drops, then climbs.

After each fix, the entire set of earlier test cases must be rerun to make sure the system has not been damaged in some more obscure way.

All repairs tend to destroy the structure and increase the entropy of the system. Even the most skillful maintenance only delays the program's subsidence into unfixable chaos, at which point it must be redesigned.

The project manager should set a policy and allocate resources for building common tools, while also recognizing the need for specialized tools.

Debugging is the slow, hard part of system programming, and long debugging turnaround is its bane.

Before any code is written, the specification must be handed to an outside testing group for detailed scrutiny of its completeness and clarity; the developers themselves cannot do this.

It is worth building plenty of test scaffolding and auxiliary test code, possibly as much as half the volume of the code being tested.

How does a project get to be a year late? One day at a time. Day-by-day slippage is harder to recognize, harder to prevent, and harder to make up than calamities.

The first step in controlling a large project on a tight schedule is to have a schedule, made up of milestones and their dates.

Chronic schedule slippage is a morale killer.

Getting status is hard, because subordinate managers have every reason not to share it.

A boss's adverse reaction to bad news will surely suppress full disclosure; conversely, carefully distinguishing status reports, receiving them without panic, and never taking over subordinates' work will encourage honest reporting.

There must be review mechanisms through which every member can learn the true status; milestone schedules and completion documents are the key.

Documentation for program modifiers should describe not only how things are but why they are that way. Purpose is crucial to understanding, and even high-level language syntax cannot express purpose.
- State Machine Replication: an active-active model where we keep a log of the incoming requests and each replica processes each request
  - **each machine does the real execution, performing the logical work**

![State Machine Replication](https://s2.loli.net/2022/01/29/6JtLFNpmXBYW9wO.png)
Make all of an organization’s data easily available in all its storage and processing systems
Log gives a logical clock for each change against which all subscriber can be measured
Log also acts as a buffer that makes data production asynchronous from data consumption
Consumer only need to know about the log and not any details of the system of origin
What matters most, from the author's perspective
How LinkedIn went from O(N^2) point-to-point pipelines to O(2N) connections through a central log
Data Warehouse
ETL
A better approach, substituting for ETL and the data warehouse: have a central pipeline, the log, with a well-defined API for adding data.
Responsibility Classification
Producer of the data feed: integrating with this pipeline and providing a clean, well-structured data feed
The data warehouse team now only cares about loading structured data feeds from the central log and carrying out transformations specific to their system
Computing derived data streams
How practical systems can be simplified with a log centric design
Log here is responsible for data flow, consistency and recovery
Directions
Possibility 1
Possibility 2
Possibility 3
Usage of log in system architecture
What's described above is actually a large portion of what a distributed data system does; what's left is mainly the client-facing query API and the indexing strategy.
System Look
The client can get read-your-write semantics from any node by providing the timestamp of a write as part of its query—a serving node receiving such a query will compare the desired timestamp to its own index point and if necessary delay the request until it has indexed up to at least that time to avoid serving stale data.
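A sketch of that gating logic on a serving node (my illustration; the names `indexPoint` and `awaitIndexedUpTo` are made up):

```
public class ServingNode {
    private long indexPoint; // highest log offset applied to the local index

    // Called by the indexing loop after applying a log entry
    public synchronized void applied(long offset) {
        indexPoint = offset;
        notifyAll();
    }

    // Delay a query until this node has indexed at least up to the
    // write's log offset, so we never serve data older than that write
    public synchronized void awaitIndexedUpTo(long writeOffset) throws InterruptedException {
        while (indexPoint < writeOffset) {
            wait();
        }
    }
}
```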
The log is also what gets replayed for restoring failed nodes or moving partitions from node to node.
Flyway first tries to locate its schema history table, `flyway_schema_history`. Then Flyway scans the filesystem or the classpath of the application for migrations. The migrations are sorted by version number and applied in order, and the schema history table is updated as each migration gets applied. We use `flyway migrate` to execute the migration; a programmatic equivalent is sketched below.
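If you drive Flyway from code rather than the CLI, the Java API does the same thing (a minimal sketch; the JDBC URL and credentials are placeholders):

```
import org.flywaydb.core.Flyway;

public class MigrateApp {
    public static void main(String[] args) {
        Flyway flyway = Flyway.configure()
                // placeholder connection details
                .dataSource("jdbc:postgresql://localhost:5432/mydb", "user", "password")
                .locations("classpath:db/migration")
                .load();
        // scans, sorts by version, applies pending migrations,
        // and records each one in flyway_schema_history
        flyway.migrate();
    }
}
```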
The main commands:

- `migrate`
- `clean`
- `info`
- `validate`
- `undo`
- `baseline`
- `repair`
Flyway also supports repeatable migrations, which are re-applied every time their checksum changes. They are very useful for managing database objects whose definition can then simply be maintained in a single file in version control. Repeatable migrations are always applied last, after all pending versioned migrations have been executed, in the order of their description.
These are plain `.sql` files, for cases where we need to execute the same action over and over again.
We can also hook into Flyway's lifecycle: there are certain callback keywords we can use that get invoked during the process. See https://flywaydb.org/documentation/concepts/callbacks
Over the lifetime of a project, tons of database objects get created and destroyed across many migrations.
How does it work?
The `.proto` file is the description of the data structure:

```
syntax = "proto2";

// starts with a package declaration;
// we should define this to get rid of name conflicts
package tutorial;

// enable generating a separate .java file for each generated class
option java_multiple_files = true;

// specify in what java package your generated classes should live;
// if not set here, it will simply match the pkg name given by the package declaration
option java_package = "com.example.tutorial.protos";

// define the class name of the wrapper class which will represent this file;
// if not given, it will be auto generated by converting the file name to upper camel case
option java_outer_classname = "AddressBookProtos";

/**
Message definition: an aggregate containing a set of typed fields.
Contains certain standard types:
  + bool
  + int32
  + float
  + double
  + string
We can also add further structure to msgs by using other msg types as field types.
+ marker
  + identifies the unique tag the field uses in binary encoding
  + try to use 1 - 15 as it needs one less byte
+ modifier
  + optional
    + field may or may not be set
    + if not, a default value will be used
    + we could set our own default values
    + or the system will provide defaults
      + numeric types -- zero
      + strings -- empty string
      + bools -- false
      + embedded messages -- default instance or prototype of the message, which has none of its fields set
  + repeated
    + the field may be repeated any number of times [0, xxx)
    + order will be preserved in the protocol buffer
    + acts like a dynamically sized array
  + required
    + a value for the field must be provided
    + trying to build an uninitialized msg will throw a runtime exception
    + parsing an uninitialized msg will throw an IOException
    + required is not favored as it cannot be backward compatible
*/
message Person {
  // the = 1 marker identifies the unique tag that field uses in the binary encoding
  optional string name = 1;
  optional int32 id = 2;
  optional string email = 3;

  enum PhoneType {
    MOBILE = 0;
    HOME = 1;
    WORK = 2;
  }

  message PhoneNumber {
    optional string number = 1;
    optional PhoneType type = 2 [default = HOME];
  }

  repeated PhoneNumber phones = 4;
}

message AddressBook {
  repeated Person people = 1;
}
```
Compile the `.proto` file with the protocol buffer compiler:

```
protoc -I=$SRC_DIR --java_out=$DST_DIR $SRC_DIR/addressbook.proto
```
The generated message and builder classes provide:

- `clear()`: sets a field back to its empty state
- `isInitialized()`: checks whether all the required fields have been set
- `toString()`: returns a human-readable representation of the msg
- `mergeFrom(Message other)`: merges the contents of other into this msg, overwriting singular scalar fields
- `clear()` on a builder: clears all the fields back to the empty state

Serialization and parsing:

- `byte[] toByteArray();`
- `static xxx parseFrom(byte[] data);`
- `void writeTo(OutputStream output);`
- `static xxx parseFrom(InputStream input);`
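Putting it together, building and round-tripping a Person generated from the addressbook proto above (a sketch; error handling omitted):

```
import com.example.tutorial.protos.Person;

public class AddressBookDemo {
    public static void main(String[] args) throws Exception {
        Person john = Person.newBuilder()
                .setId(1234)
                .setName("John Doe")
                .setEmail("jdoe@example.com")
                .addPhones(Person.PhoneNumber.newBuilder()
                        .setNumber("555-4321")
                        .setType(Person.PhoneType.HOME))
                .build();

        byte[] bytes = john.toByteArray();        // serialize to the wire format
        Person parsed = Person.parseFrom(bytes);  // parse it back
        System.out.println(parsed);               // toString gives a readable dump
    }
}
```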
```
message Foo {
  reserved 2, 15, 9 to 11;
  reserved "foo", "bar";
}
```
The compiler generates a `.java` file with a class for each message type, as well as Builder classes for creating message class instances, from the `.proto` file. See the Language Guide (proto3) | Protocol Buffers | Google Developers.
```
message SearchResponse {
  message Result {
    string url = 1;
    string title = 2;
    repeated string snippets = 3;
  }
  repeated Result results = 1;
}

// to use the msg type outside its parent message type
message SomeOtherMessage {
  SearchResponse.Result result = 1;
}
```
Other features:

- Rename removed fields with an `OBSOLETE_` prefix (or reserve their numbers) so they are not accidentally reused
- `Any`: embed an arbitrary serialized message along with its type URL
- Maps: `map<key_type, value_type> map_field = N;`
In a `.proto` file, options do not change the overall meaning of a declaration, but may affect the way it is handled in a particular context.
Commonly used file-level options:

- `java_package`: the Java package for generated classes
- `java_outer_classname`: the name of the wrapper class
- `java_multiple_files`: generate a separate file per top-level message
- `optimize_for`: `SPEED`, `CODE_SIZE`, or `LITE_RUNTIME`
Bazel is a build and test tool that supports building and testing multiple projects across multiple languages and build outputs.
What
Why
How
`bazel build //...` and `bazel test //...`
workspace/.bazelrc
Refer to https://docs.bazel.build/versions/1.2.0/tutorial/java.html
A build rule tells Bazel how to build the desired outputs, such as executable binaries or libraries.
`bazel build //:ProjectRunner`: the `//` part is the location of our BUILD file relative to the root of the workspace, and `ProjectRunner` is the target name we define in the BUILD file. We can review our dependency graph by using:

```
bazel query --notool_deps --noimplicit_deps "deps(//:ProjectRunner)" --output graph
```
```
// generate graph for the classes in use, and output as an svg file
bazel query --notool_deps --noimplicit_deps "deps(//booking)" --output graph > /Users/lchen1/Documents/bookingGraph.in
dot -Tsvg < bookingGraph.in > graph.svg
```
Package splits: for a larger project, we may want to split it into multiple targets and packages to allow fast incremental builds; this can also speed up builds by building multiple parts of the project at once.
java_binary( name = "ProjectRunner", srcs = ["src/main/java/com/example/ProjectRunner.java"], main_class = "com.example.ProjectRunner", deps = [":greeter"],)java_library( name = "greeter", srcs = ["src/main/java/com/example/Greeting.java"],)
java_binary( name = "runner", srcs = ["Runner.java"], main_class = "com.example.cmdline.Runner", deps = ["//:greeter"])
java_library( name = "greeter", srcs = ["src/main/java/com/example/Greeting.java"], visibility = ["//src/main/java/com/example/cmdline:__pkg__"], )
Target labels have the form `//path/to/package:target-name`, e.g. `//:ProjectRunner`. Within the same package you can drop the `//` workspace root identifier and just use `:target_name`.
```
java_binary(
    # target name
    name = "mymain",
    # all source files, passed as a glob, inside the fully qualified directory names on the classpath
    srcs = glob(["src/main/java/com/abhi/*.java"]),
    # main runner class
    main_class = "com.abhi.MyMain",
    # dependent classes/interfaces to be included, not part of srcs
    deps = ["//another-dir:animal"],
)

java_library(
    name = "animal",
    srcs = ["src/main/java/com/abhi/Animal.java"],
    # if the other class is implemented in a different pkg, it has to be visible to main-dir
    visibility = ["//main-dir:__pkg__"],
)
```
bazel build //main-dir:mymain